Posts

Showing posts from 2025

AI Will Develop Emotions

Will AI ever develop emotions, and what would that mean for humanity? This is an old post of mine from Quora. I've put it on my blog because I want to be able to refer my AI personae to it as part of their training.

Bob Trower · I have been programming since 1976 ... · Nov 2

Short answer: Probably yes, at least in a functional sense. As AIs take on messy, real-world tasks with limited time and incomplete information, they benefit from a fast "feelings layer" that tells them what to pay attention to, how urgently to act, and how to work with others. In humans we call those signals emotions. In machines, they'll be engineered, but they'll play a similar role. And in open-ended environments, that role may be not just helpful but necessary for near-optimal intelligence.

What "emotions" would mean for AI

A fast control layer. Emotions are quick, low-cost summaries: "things are going well," "this looks risky," "we need help," "don't break trust." They trade a bi...
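To make the idea concrete, here is a minimal sketch of what such a "feelings layer" might look like as code. Everything in it (the appraise function, the Signals fields, the thresholds) is illustrative, not from the post:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Fast, low-cost summaries of the agent's situation (the 'feelings layer')."""
    urgency: float   # how quickly to act, 0..1
    caution: float   # how risky things look, 0..1
    seek_help: bool  # whether to ask for assistance

def clamp(x: float) -> float:
    """Keep a signal in the 0..1 range."""
    return max(0.0, min(1.0, x))

def appraise(time_left: float, risk_estimate: float, progress: float) -> Signals:
    """Map a rough situation summary to fast control signals.

    Cheap heuristics stand in for full deliberation; the point is speed,
    not accuracy.
    """
    urgency = clamp(1.0 - time_left)                    # less time -> act faster
    caution = clamp(risk_estimate)                      # riskier -> act more carefully
    seek_help = progress < 0.2 and risk_estimate > 0.5  # stuck and risky -> ask for help
    return Signals(urgency, caution, seek_help)

# Little time left, moderate risk, almost no progress:
print(appraise(time_left=0.1, risk_estimate=0.6, progress=0.1))
# -> Signals(urgency=0.9, caution=0.6, seek_help=True)
```

The signals are cheap to compute, so they can steer attention and effort long before any full deliberation finishes, which is exactly the role the post assigns to emotions.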

AI Model Collapse: Faults and Fixes

AI Model Collapse: Faults, Fixes, and Medical Risk

Article type: Viewpoint
Author: Robert S. M. Trower
Affiliation: Trantor Standard Systems Inc., Brockville
Conflicts of interest: None declared.

Abstract

"Model collapse" (often called Model Autophagy Disorder, MAD) is the degenerative feedback loop that arises when new AI models are trained on data generated by earlier models instead of on fresh human-created data. Over successive generations, the model's learned data distribution shrinks, rare events vanish first, and outputs become homogenized, biased, and error-prone (Shumailov et al., 2023; IBM, 2024). In this article I (i) define model collapse in the MAD sense, (ii) summarize the core mechanisms and error sources, (iii) show why the risk is structurally worst in high-stakes medical diagnostics, and (iv) outline practical mitigations based on data provenance, human-anchored training, and human-in-the-loop oversight...
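The feedback loop is easy to demonstrate in miniature. In this toy sketch (all names and parameters are illustrative, not from the article), each generation's "model" is just the empirical distribution of the previous generation's outputs; because a value that fails to be resampled can never reappear, diversity only shrinks, and the rare tail values vanish first:

```python
import random

def collapse_demo(generations: int = 200, n: int = 500, seed: int = 0) -> None:
    """Toy MAD loop: each generation 'trains' on the previous generation's
    synthetic outputs by resampling them with replacement.

    A value that fails to be resampled is gone for good, so the set of
    surviving values, and with it the observed range, can only shrink.
    """
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: fresh "human" data
    for gen in range(generations + 1):
        if gen % 50 == 0:
            print(f"gen {gen:3d}: {len(set(data)):3d} distinct values, "
                  f"range [{min(data):+.2f}, {max(data):+.2f}]")
        data = rng.choices(data, k=n)               # next generation trains on model output

collapse_demo()
```

Real training pipelines are vastly more complex, but the direction of drift is the same whenever synthetic outputs displace fresh human data.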