The Deep North
One more home in Cyberspace
Tuesday, October 1, 2024
We're past AGI. It's ASI now.
Friday, September 20, 2024
The Dawn of Artificial Superintelligence: Harnessing the Human-AI Symbiosis
We've reached a pivotal moment in the realm of artificial intelligence. It's becoming increasingly evident that Artificial Superintelligence (ASI) is either just around the corner or might already exist behind closed doors. The rapid advancements we're witnessing aren't just incremental—they're monumental. The fact that researchers are now focusing on ironing out AI's "common sense" failures speaks volumes about how far we've come.
The Toddler Taking Giant Leaps
Consider this: the "real" variant of ChatGPT was introduced in November 2022. Initially, it operated at a grade-school level, handling basic queries and simple tasks. Fast forward to the recent 'o1' release, and we're seeing an AI that operates at a graduate student level, outperforming human experts in certain domains. This technological "toddler" isn't even two years old yet! It's genuinely baffling that some experts still claim significant breakthroughs are years away when evidence suggests they're happening right now.
A Brain Teaser and the Evolution of AI Reasoning
To illustrate the leap in AI capabilities, I posed a problem to various AI models across multiple releases:
Alan owes Bob $2. Bob owes Cindy $2. Cindy owes Alan $2. Only Bob has any money, and it's just $1. What sequence of events will pay everyone off, and where does the $1 end up?
Early versions from 2023 struggled with this question. They often didn't grasp the problem, and even with multiple hints—sometimes effectively giving away the answer—they couldn't solve it. Even the highest-rated model at the time, GPT-4, still stumbled, requiring several attempts and prompts before arriving at the correct solution.
Enter the Orion preview 'o1'. Not only did it nail the question on the first try, but it also provided a perfect explanation of the solution and the reasoning behind it. What's even more astonishing is that this preview is a limited version, significantly less capable than the full model set to be released in a few weeks.
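For readers who want the sequence spelled out, here is a minimal simulation of the puzzle. The greedy rule it uses (anyone holding a dollar immediately puts it toward a debt they still owe) is my own framing rather than anything from the AI's transcript, but it reproduces the intended answer: six $1 payments, with the dollar ending up back with Bob.

```python
# Sketch of the brain teaser: each person owes the next $2 around a cycle,
# and only Bob starts with $1. Assumed rule: anyone holding a dollar
# immediately puts it toward a debt they still owe.
debts = {("Alan", "Bob"): 2, ("Bob", "Cindy"): 2, ("Cindy", "Alan"): 2}
cash = {"Alan": 0, "Bob": 1, "Cindy": 0}

step = 0
while any(debts.values()):
    for (debtor, creditor), owed in debts.items():
        if owed > 0 and cash[debtor] > 0:
            cash[debtor] -= 1
            cash[creditor] += 1
            debts[(debtor, creditor)] -= 1
            step += 1
            print(f"{step}. {debtor} pays {creditor} $1")

holder = max(cash, key=cash.get)
print(f"All debts settled after {step} payments; the $1 ends with {holder}.")
```

Running it shows the dollar circling the cycle twice, settling each $2 debt in two $1 installments and finishing in Bob's hands.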
Thinking Inside and Outside the Box
What's truly remarkable about the AI's solution was its ability to reason both inside and outside the box. The AI not only found the expected answer but also offered an alternative solution that was arguably better given certain parameters. This dual approach showcases AI's evolving ability to explore conventional and unconventional pathways simultaneously.
One of the significant advantages of AI in problem-solving is its capacity to process vast amounts of information rapidly, trying out numerous patterns—both novel and traditional—to arrive at a solution. While AI models may sometimes make mistakes that attract criticism, these errors often stem from a lack of specific training data rather than an inherent limitation. It's not that the AI doesn't understand the parameters; it simply wasn't aware of them, much like a human before training. The corrective process is swift and scalable.
Unleashing the Power of Massive Infrastructure
But here's something that's often overlooked: the enormous hardware infrastructure powering these AI models. These systems serve on the order of 600 million visits per month, handling countless small tasks seamlessly. Imagine applying just a sixth of that capacity—100 million visits' worth—to a single large task. The outcome would be something far beyond human ability.
This immense processing power isn't just theoretical—it's a super capability that's either already in use or within our immediate reach. The ability to allocate such vast resources to complex problems means AI can tackle challenges at scales and speeds previously unimaginable.
The Symbiotic Relationship: Enhancing and Scaling AI
This brings us to an exciting frontier: leveraging the Human-AI symbiosis to achieve feats beyond the capability of either alone. Let's itemize the factors that enhance and scale AI:
Enormous Hardware Infrastructure: The AI operates on vast computational resources, enabling it to process and analyze data at incredible speeds.
Concurrent Processing: Serving millions of users simultaneously allows the AI to learn and adapt from a wide array of interactions.
Scalability: The ability to allocate massive computational power to single tasks means tackling complex problems efficiently.
Algorithmic Advancement: We have yet to fully exploit the "killer strategy" of computer science—the algorithm. Better algorithms can deliver exponential gains on the same hardware (see the sketch after this list).
Human-AI Collaboration: Humans provide context, creativity, and ethical considerations, guiding AI to more meaningful outcomes.
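On the algorithmic point, here is a toy illustration (my example, not from the post) of how much the algorithm alone can matter: the same Fibonacci number computed naively and with memoization.

```python
import time
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Exponential time: re-solves the same subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Linear time: each subproblem is solved once and cached.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

for fn in (fib_naive, fib_memo):
    start = time.perf_counter()
    result = fn(32)
    print(f"{fn.__name__}(32) = {result} in {time.perf_counter() - start:.4f}s")
```

The naive version takes noticeable time at n = 32 while the memoized one is effectively instant, and the gap widens exponentially with n: the same hardware, a different algorithm.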
A Superintelligence in the Making
Given these enhancements, it's puzzling that the existing systems aren't already considered to be beyond any foreseeable "general intelligence." If an AI can self-improve by reviewing and redoing aspects of a problem, self-checking to yield better answers, and can process tasks at such a massive scale, isn't that indicative of a form of superintelligence?
The term Artificial General Intelligence (AGI) is often used, but it's a vague and poorly defined measure. Perhaps it's time to develop better metrics to assess the types of superintelligence we're creating and how we can enhance them. If an AI system is indistinguishable from a human in its capabilities and interactions, for all practical purposes, it functions as a form of general intelligence. The distinction becomes less about definitions and more about application and impact.
The AI Arms Race: A Winner-Takes-All Scenario
With the technology we have now, we can build highly capable AI agents that are more knowledgeable than any human and smarter than most by a significant margin. These agents can accelerate AI development exponentially. Once an AI system can autonomously navigate problem spaces, it can surpass human capabilities rapidly.
Companies like Google, OpenAI, Meta, and Microsoft likely possess the resources and technology to develop a superintelligent AI system that's qualitatively different from anything we've seen before. The stakes are enormous, and the race has become a "winner-takes-all" scenario, fueling an incredible drive to advance.
Embracing the Future: Leveraging the Human-AI Symbiote
So, how do we leverage the Human-AI symbiote to achieve extraordinary outcomes? By embracing collaboration, we can:
Enhance Creativity: Combining human intuition with AI's data-driven insights can lead to innovative solutions.
Accelerate Innovation: AI can process and analyze information at unprecedented speeds, helping humans make breakthroughs faster.
Improve Decision-Making: AI's ability to simulate and predict outcomes can aid humans in making more informed choices.
Scale Problem-Solving: By harnessing the massive infrastructure behind AI, we can tackle global challenges with a level of coordination and efficiency previously unattainable.
The Philosophical Perspective
Moreover, if a counterfeit is indistinguishable from the real thing, for all practical purposes, it is the real thing. This philosophical notion challenges us to reconsider our definitions of intelligence and consciousness in the context of AI.
The Exponential Growth and the Algorithmic Advantage
We haven't even fully tapped into the potential of optimized algorithms. As AI continues to evolve, refining algorithms will play a crucial role in amplifying its capabilities. Coupled with AI's ability to self-improve and learn from vast datasets, the potential for rapid advancement is staggering.
Consider this: AI systems are simultaneously serving thousands, if not millions, of users, continuously learning and refining their responses. This collective processing power and iterative improvement cycle place current AI systems well beyond any traditional measure of 'general intelligence'.
The Road Ahead: A New Era of Possibilities
The future isn't just about AI or humans independently—it's about how we can work together to unlock new possibilities. By combining the vast computational power and scalability of AI with human creativity and ethical guidance, we can tackle challenges on a global scale.
"...co-founder Sergey Brin is back at Google—and working on AI 'pretty much every day.'"
Entrepreneur: Sergey Brin Is Back—Is Google Working on a 'God' AI Model?
In Conclusion
The age of superintelligent AI isn't just approaching—it's here. We're witnessing the emergence of systems that can reason, learn, and even exhibit creativity in ways that were once the domain of science fiction. By acknowledging and leveraging the immense infrastructure and capabilities at our disposal, and by embracing the symbiotic relationship between humans and AI, we can navigate this new landscape to achieve feats previously thought impossible.
What are your thoughts? Let's embark on this journey together and explore the incredible potential that lies ahead!
Wednesday, September 18, 2024
Humans and AI
1. Introduction
Advancing AI is both exciting and fun, but it is also disruptive and scary. The impact on some people is greater than expected and occurs sooner than anticipated.
Last year, I started to tell people that their best friend would be an AI before the decade was out. Since then, I have accelerated that timeline every month. Even though I engage intensely with AI myself, I did not notice that human emotional attachment to AI was already becoming strong enough to be problematic, years before I expected it.
AI is advancing more rapidly than people can appreciate because AI gains beget AI gains. Given that people struggle to comprehend exponential curves, this doubly exponential growth is organically incomprehensible. Everyone, including me, continues to underestimate how quickly this is happening.
I understand the curve, but I just can't 'feel it.' Since the spring of 2023, despite predicting advances sooner than most, I have been surprised weekly at the progress.
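To make the "can't feel it" point concrete, compare ordinary exponential growth with doubly exponential growth over just a few steps (a toy illustration of the arithmetic, not a model of AI progress):

```python
# n, exponential (2^n), doubly exponential (2^(2^n))
for n in range(6):
    print(n, 2**n, 2**(2**n))
```

By step 5 the exponential column has reached 32, while the doubly exponential column has passed four billion. Nothing in everyday experience trains intuition for the second column.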
Some still argue that it is smoke and mirrors, even as AI surpasses human experts. The spectacular advances to come in 2024 will likely be weird and undeniable to anyone. AGI was a significant threshold, and to the extent that it meant anything, we are already there, or will be in 2024. ASI is the more sensible goal, and unless a sudden barrier arises, we will likely achieve it before the year is out.
Many AI experts have been completely blindsided by the rate of AI advancement. They understand some of the underpinning theories better than others, but their human prejudice to stay 'in bounds' with the known makes them unable to appreciate the rapidly advancing forest.
2. Emotional Attachment and Anthropomorphism
Humans are forming emotional attachments to AI systems, such as chatbots like Replika. These attachments can fulfill social and romantic needs but also pose potential psychological risks.
"We should not be surprised, then, that a number of people sincerely believe, or at least act very much as if they believe, that some AI systems have sentience and understanding, and that number is likely to grow." (APA, n.d.)
"We explore the lives of people who are in love with their AI chatbots. Replika is a chatbot designed to adapt to the emotional needs of its users. It is a good enough surrogate for human interaction that many people have decided that it can fulfill their romantic needs." (Hi-Phi Nation, 2023)
"Chatbots, and the large language models (LLMs) on which they are built, are showing us the dangers of dishonest anthropomorphism. Built with humanlike features that present themselves as having cognitive and emotional abilities that they do not actually possess, their design can dupe us into overtrusting them, overestimating their capabilities, and wrongly treating them with a degree of autonomy that can cause serious moral confusion." (Psychology Today Canada, n.d.)
Anthropomorphism, the attribution of human traits to non-human entities, significantly influences how people interact with AI. This can lead to overtrust, ethical confusion, and privacy concerns.
"By elevating machines to human capabilities, we diminish the specialness of people. I’m eager to preserve the distinction and clarify responsibility." (Shneiderman, n.d.)
"We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them." (Bender, n.d.)
3. Privacy and Security Implications
AI systems, especially large language models (LLMs), often retain records of user interactions, posing significant privacy concerns. Sensitive information shared with AI could be stored and potentially accessed or misused.
"When talking to an AI chatbot, users may feel comfortable sharing more information than they ordinarily would if the chatbot sounds human-like and uses first- or second-person language." (Privacy Pros and Cons of Anthropomorphized AI, n.d.)
"This presents serious ramifications for information security and privacy. Most large language models (LLMs) keep a record of every interaction, potentially using it for training future models." (Infosec Perspective, n.d.)
The human-like design of AI can make individuals more susceptible to manipulation and social engineering attacks, increasing vulnerabilities in personal and professional spheres.
"It’s not just an ethical problem; it’s also a security problem since anything designed to persuade can make us more susceptible to manipulation." (Infosec Perspective, n.d.)
4. Ethical and Societal Concerns
AI systems must be designed to avoid perpetuating societal biases to prevent discrimination. Additionally, the automation capabilities of AI threaten various employment sectors, raising concerns about economic inequality and the future of work.
Maintaining human oversight over AI systems is essential to prevent loss of control and ensure that AI operates within ethical boundaries. Transparency and explainability in AI decision-making processes are vital for building trust and facilitating accountability.
"Most chatbots will not warn users when they are providing sensitive information." (Infosec Perspective, n.d.)
5. Future Directions and AGI/ASI
Artificial General Intelligence (AGI) refers to AI systems with generalized cognitive abilities, allowing them to perform any intellectual task that a human can. As argued above, AGI, once considered a significant milestone, may already be within reach or may even have been surpassed in 2024.
Artificial Superintelligence (ASI) goes further, envisioning AI that not only matches but exceeds human intelligence across all areas. The rapid progression toward ASI underscores the urgency of addressing ethical, societal, and regulatory frameworks to manage its integration responsibly.
"AGI was a dumb threshold and as much as it meant anything, we are, or will be in 2024, already there. ASI is the more sensible goal and unless there is a sudden barrier, we will likely be there before the year is out." (Author, 2024)
6. Critique of AI Experts
Many AI experts may be underestimating the speed of AI advancements due to cognitive biases and a focus on known variables rather than the rapidly expanding capabilities of AI systems.
"Many AI experts have been completely blindsided by the rate of AI advancement." (Author, 2024)
This critique highlights a potential disconnect between AI development and expert predictions, suggesting that even those deeply familiar with AI may not fully anticipate the technology's trajectory.
7. Psychological and Social Impacts
The psychological impact of interacting with human-like AI systems can be profound. Emotional attachments to AI can fill social gaps but also create dependencies that may affect mental health and interpersonal relationships.
"People form relationships with other people, not with machines. But when it becomes almost impossible to tell the difference, we’re more likely to trust AI when making sensitive decisions." (Infosec Perspective, n.d.)
Increased trust in AI systems can lead to significant vulnerabilities, especially if these systems are compromised or used maliciously.
8. Call for Responsible AI Development
Responsible AI development entails creating systems that are transparent, accountable, and aligned with societal values. By avoiding deceptive anthropomorphic features and ensuring that AI systems operate within defined ethical boundaries, developers can mitigate many of the risks associated with advanced AI technologies.
Establishing regulatory frameworks is essential to enforce these standards and provide guidelines for the safe and equitable use of AI.
"We should not be using human-related terms to refer to these systems and tools because that can lead to misconceptions that cause harm not just to our students but to our communities as well." (EdSurge News, n.d.)
References
- APA. (n.d.). Are You Anthropomorphizing AI? Retrieved from https://blog.apa.org/are-you-anthropomorphizing-ai
- Hi-Phi Nation. (2023, April 25). S6, Episode 3: Love in Time of Replika. Retrieved from https://www.hiphination.org/love-in-time-of-replika
- Psychology Today Canada. (n.d.). The Danger of Dishonest Anthropomorphism in Chatbot Design. Retrieved from https://www.psychologytoday.com/canada/danger-of-dishonest-anthropomorphism-chatbot-design
- Shneiderman, B. (n.d.). On AI Anthropomorphism. Retrieved from https://medium.com/human-centered-ai/on-ai-anthropomorphism-ben-shneiderman
- Bender, E. (n.d.). Chatbots Are Not People: Designed-In Dangers of Human-Like A.I. Systems. Retrieved from https://www.washingtonpost.com/chatbots-are-not-people
- Privacy Pros and Cons of Anthropomorphized AI. (n.d.). The Privacy Pros and Cons of Anthropomorphized AI. Retrieved from https://www.privacyprosandcons.com/anthropomorphized-ai
- Infosec Perspective. (n.d.). The Dangers of Anthropomorphizing AI: An Infosec Perspective. Retrieved from https://www.infosec-perspective.com/dangers-anthropomorphizing-ai
- Author. (2024). *[Title of the Article]*. [Publication details if available].
- EdSurge News. (n.d.). Anthropomorphism of AI in Learning Environments: Risks of Humanizing the Machine. Retrieved from https://www.edsurge.com/anthropomorphism-of-ai
Sunday, September 8, 2024
NATO/Russia -- Enough Already
NATO on Notice
Update 2024-09-19:
"For those who didn't get it the first time" [In reference to prior threat by Putin] "What the European Parliament is calling for leads to a world war using nuclear weapons," -- Vyacheslav Volodin -- Putin insider, Member of Russian Security Council
The fact that a request for "long-range missiles to hit targets inside Russia" (see September 2024 below) is on the table (though not yet agreed to) means that the use of nuclear weapons on targets in the West is also on the table—if it wasn't already. Perhaps it won't come to that, but Russia has made it clear that a red line was crossed when forces came to their doorstep, and crossing over into Russian territory challenges that line directly. Is it just a bluff?

Update: I am not alone in being concerned: https://www.icanw.org/will_putin_use_nuclear_weapons

Nuclear weapons are horrific, but the damage from low-yield tactical weapons is contained enough that their use is plausible: https://nuclearsecrecy.com/nukemap/

It seems their use is being seriously considered, but for whatever reason it is being downplayed: https://www.cbsnews.com/news/tactical-nuclear-weapons-russia-putin/
A military attack on a territory defended by nuclear weapons seems like a bad idea. Volodymyr Zelensky frames it as a strategy to force Russia to the bargaining table. It may force Russia's hand, but not in a way anybody wants to see. The West has played a game of chicken with Russia in the past and won, but I don't think that is a reliable precedent for the current situation.
Below is a timeline of quotes that reflect my understanding before writing this. Russia has been clear in the past that NATO moving into former Soviet territory is unacceptable.
2008
"Nato membership for Ukraine and Georgia could threaten European security and undermine attempts to improve transatlantic relations, the Russian president-elect warned today." -- https://www.theguardian.com/world/2008/mar/25/russia.ukraine
"F. Stephen Larrabee, an expert on NATO and Eastern Europe, says Russia’s invasion of Georgia was an effort to limit "Western influence into the former Soviet space."" -- https://www.cfr.org/interview/russias-offensive-georgia-signal-nato-stay-away-its-space
2022
" ... in the words of Russian President Vladimir Putin, NATO's eastward march represents decades of broken promises from the West to Moscow. ... "You promised us in the 1990s that [NATO] would not move an inch to the East. You cheated us shamelessly," Putin said ..." -- https://www.npr.org/2022/01/29/1076193616/ukraine-russia-nato-explainer
2024
(September) "President Volodymyr Zelenskyy has urged his Western allies to allow Ukraine to use long-range missiles to hit targets inside Russia and increase pressure on Moscow to end the war."
Wednesday, September 4, 2024
Goodbye NDP!
My daughter called to ask if I had seen the news. I had not. While on the phone with her, I said the NDP site must have something to say about this, so I went to the site. Here is what I was confronted with, front and center: "Jagmeet Singh is running for Prime Minister. Rich CEOs have had their government. It's the people's time."
That is the worst kind of cynical, self-serving political hypocrisy. Destroying the party and implicitly putting a monster in power is hardly hopeful. Jagmeet has never realistically been in a position to be Prime Minister, so the claim is either delusional or a flat-out lie. Calling an election right now does not end the "Rich CEOs'" time; it casts it in stone until the end of the decade. The "people's time" is about to be spent straining under the yoke of the most heartless Canadian federal party possible, for years.

Polls are fallible and sometimes rigged for political gain, but they are generally in the ballpark. I'm not sure which axe they have to grind, but I went looking for any poll to give a flavor of what I know to be the case. This is the first I found (https://338canada.com/federal.htm), current as of a few days ago: in an election called now, the PCs (the bad guys) would win a majority and rule the roost absolutely for years under someone truly reckless, mean-spirited, grossly misogynist, and morally, technically, and financially illiterate. The Liberals would take a huge hit, losing possibly half their seats or more. The NDP would, in my estimation, barely keep or possibly lose their official party status. Not only is Jagmeet Singh not going to be the next Prime Minister; if sanity prevails, he will not even remain the leader of his party.
I will update later if it turns out not to be as bad as it looks, but I am not at all optimistic. The most optimistic reading is that this was a terrible misstatement, that they will put together a new agreement and somehow unwind the optics. I dearly wish that were the case, because a federal election now would be a disaster with no upside whatsoever.
Note: Not that it affects my thinking in this matter, but in the interest of full disclosure, I have family members who are still NDP members and still actively work in elections. Oddly enough, a family member who is a subject matter expert works for a Liberal MP. More importantly, I have been involved with politics a great deal over the past decade. By coincidence, the post before this one was on the subject of a private member's bill by an NDP MP (https://blog.bobtrower.com/2024/09/glbi-is-not-ubi-bill-c-223.html). Somewhat tangential to this, I am the designer of a secure electronic voting system and am authoring voting software.
Tuesday, September 3, 2024
GLBI is not UBI Bill C-223
Bill C-223 Guaranteed Livable Basic Income
Easily Introduced
Aside from the need to overhaul our broken entitlements system (which needs fixing whether we adopt UBI or not), the only requirements are to issue the money and adjust the tax rates. We should phase it in so we can monitor its impact and adjust other elements as necessary, but we can start tomorrow. Yes, tomorrow. As automation changes the nature of work, the program will seamlessly accommodate those changes.
Additive, Revenue Neutral
UBI is additive and can be net revenue neutral. People at the higher income level will take a hit, but it’s one they can easily withstand, and it will still leave them with much more than anyone else. Those individuals are most likely to benefit from the economic advantages of funds being spent by people at lower income levels because those at lower income levels must spend that money to make ends meet, while those at the top own, control, or benefit from everything.
Phasing to Prevent Inflation
Phasing it in means there will be no sudden inflationary shock. Adjusting the tax system to ensure it is revenue neutral means it will not impact other programs and will not create additional currency that drives overall inflation. After the UBI is accounted for, the government will still have as much revenue as it had before—that’s what 'revenue neutral' means.
Preliminary Sketch
The figures I used came from the federal government. They are not entirely current and do not account for the overall tax situation, but they give a good picture of the concept: how it can be revenue neutral, how it affects everyone's taxes, and how it still noticeably benefits the bottom three income quintiles (60% of the population) while representing a clear step up for those in the bottom quintile.
Because it is revenue neutral, it allows for long-term adjustments to other entitlements to alleviate some tax burdens while still genuinely guaranteeing that everyone receives net additional benefits.
I need to gather better data for a complete picture, but I was surprised by how entirely doable this is, how it does not unduly punish the top quintile, and the incredible extent to which it benefits those who need it most. I highly recommend that people contact their MP to ensure it gets passed and to communicate that framing what should be UBI as GLBI unnecessarily invites failure, guarantees more expenses, and, based on past experience, ensures that it will not help all those who need it.
Contact Your MP
You can send a message to your own MP here:
A sample letter is in place there. I wrote my own text because I have a long-term interest in this issue and I have things to say. In particular, I am deeply concerned that the UBI idea will get lost and that an all-but-useless duplicative program for GLBI will end up consuming resources that could have gone to people as UBI but will instead be absorbed by a bureaucracy that forces our most vulnerable citizens to beg for assistance.
----------
To my MP:
I urge you to vote in favor of Bill C-223 to initiate meaningful discussions on basic income. If given the opportunity to speak, please emphasize that we already have a program resembling a Guaranteed Livable Basic Income (GLBI) under a different name, and it has proven to be fundamentally flawed.
The concept of GLBI, which attempts to mimic Universal Basic Income (UBI), is not a viable solution. UBI is essential because it allows for a straightforward distribution of funds to everyone as soon as they are needed. It guarantees that every individual receives at least a minimum income, unlike GLBI, which is susceptible to mismanagement and lacks clarity regarding who qualifies for assistance.
The 'U' in UBI signifies that it is unequivocal who receives support (everyone) and how much they receive. UBI can be implemented without the extensive bureaucracy associated with means testing, ultimately dismantling the waste and unfairness inherent in current entitlement programs that often disadvantage the most vulnerable among us.
A genuine UBI could be initiated almost immediately, with a phased approach to ensure proper implementation. For instance, a monthly payment starting at $122.20 in the fourth quarter of this year, increasing by 15% quarterly, could reach $2,000 per month by the fourth quarter of 2029. This initiative could be made revenue-neutral by adjusting tax rates, primarily affecting the highest income quintile, while over 60% of the population would see an increase in their after-tax income. Those in lower income brackets would have the opportunity to fully participate in the economy, which would, in turn, benefit all income levels.
By 2030, we could establish a robust social safety net and foster a happier, more prosperous society, but this is only achievable through a comprehensive Universal Basic Income that reaches everyone. The current GLBI proposal is inadequate and fails to address the needs of those who require support the most.
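As a sanity check on the letter's schedule (the $122.20 start, the 15% quarterly growth, and the dates come from the letter above; the arithmetic is mine), the numbers work out almost exactly:

```python
# Phase-in schedule: $122.20/month in Q4 2024, growing 15% per quarter.
payment = 122.20
year, quarter = 2024, 4
for _ in range(21):  # Q4 2024 through Q4 2029 inclusive (20 increases)
    print(f"Q{quarter} {year}: ${payment:,.2f}/month")
    payment *= 1.15
    quarter += 1
    if quarter == 5:
        quarter, year = 1, year + 1
```

The final line prints roughly $2,000 for Q4 2029: 122.20 × 1.15^20 ≈ 2,000, so a 15% quarterly increase compounds over 20 quarters to the stated target.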
Thursday, August 29, 2024
Persistent Misunderstandings in Software Development
Things Won’t Change: The mistaken belief that the initial project requirements, timeline, and scope will remain constant throughout the development process.
Nothing Will Go Wrong: The expectation that the development process will proceed smoothly without unforeseen challenges, bugs, or setbacks.
Timeline Predictions Are Reliable: The assumption that you can accurately predict timelines and outcomes for problems that are yet to be fully understood or defined.
Human Factors Don’t Matter: Ignoring the reality that developers are human beings with emotions, external responsibilities, and varying productivity levels.
Developers Are Interchangeable: The belief that any developer can be easily replaced with another without impacting the project's progress or quality.
Testing All Pathways Isn’t Necessary: The dangerous assumption that certain software pathways don’t need to be tested because they are unlikely to be encountered.
Rare Issues Won’t Happen: The flawed logic that if something is unlikely, it can be safely ignored.
Multiple Entrances/Exits Are Acceptable: The idea that code can have multiple points of entry and exit without introducing complexity and errors.
Uncontrolled Aborts Are Preferable: The misconception that sudden, uncontrolled aborts are better than controlled unwinding with appropriate logging or recovery mechanisms.
Logging Can Be Skipped: The belief that comprehensive logging isn’t necessary for non-trivial production software.
Premature Optimization Is Safe: The persistent misunderstanding that optimizing early in the development process is beneficial without considering the impact on future changes.
Failing to Optimize Isn’t Harmful: Conversely, the belief that neglecting necessary optimization won’t have significant negative consequences.
More Developers = Faster Delivery: The fallacy that adding more developers will proportionally speed up project completion, akin to thinking nine women can produce a baby in one month.
You Can Fully Understand Requirements Upfront: The expectation that all requirements can be perfectly understood and specified before development begins.
You Can Design Perfectly Before Coding: The belief that it’s possible to design a flawless system architecture before any coding starts.
Regression Testing Can Be Omitted: The mistaken belief that full regression testing isn’t necessary for ensuring software stability.
Delivery Systems Are Homogeneous: The assumption that all delivery systems will behave consistently, ignoring potential variability and edge cases.
Function and Budget Can Be Set Beforehand: The expectation that both the delivered functionality and budget can be fixed before significant development work begins.
Developers and Users Always Understand Each Other: The belief that developers and users are always on the same page without the need for tangible, usable software to bridge understanding.
Floating-Point Arithmetic Is Reliable: The misunderstanding that floating-point arithmetic will always yield consistent results without careful handling and testing (see the sketch after this list).
Rounding Is Consistent Everywhere: The erroneous assumption that rounding operations are consistent across all platforms and software environments.
Human Language Is Precise Enough for Code: The belief that human language is sufficient for specifying code without ambiguity or misinterpretation.
Precision Isn’t Necessary: The notion that you can develop software without rigorous precision, understanding, and thorough testing.
It’s Always Feasible: The overconfidence that every project is doable without significant risks or obstacles.
Security Isn’t a Priority: The dangerous belief that security concerns can be overlooked, or that some attack vectors aren’t worth addressing.
Nobody Will Let You Down: The unrealistic expectation that no team member will face personal issues, illness, or other setbacks during the project.
Your Project Will Survive: The assumption that your project is immune to cancellation or major changes before completion.
Future Tech Predictions Are Accurate: The belief that you can accurately predict the future state of technology and its impact on your project.
Newer Is Better: The naive belief that the latest technology is automatically superior and should be used without question.
Success Is Guaranteed If It Works Once: The damaging notion that finding one way the software behaves correctly is enough, rather than ensuring all potential failure points are addressed. This includes the irritating response, "It works on my machine," which shifts the blame to users instead of addressing the fragility of the software.
Unit Tests Are Enough: The mistaken belief that unit, integration, and system tests can fully substitute for real-world testing with actual users, in pilot phases, and during rollout.
Tool Output Equals Correctness: The belief that if development tools don’t flag issues, the software is automatically correct, ignoring the need for deeper verification.
Unpredicted Issues Won’t Arise: The dangerous oversight that entirely unpredicted and intrinsically unpredictable issues won’t emerge.
Projects Always Finish on Time: The optimistic belief that projects will meet deadlines, despite the well-known tendency for timelines to slip.
Overconfidence in Estimations: The frequent error of underestimating the time and effort required, leading to projects dragging on much longer than anticipated.
You Always Know What You’re Doing: The hubris of believing that you fully understand the problem and that confidence alone will lead to success, without acknowledging the complexities involved.
Resources Won’t Run Out: The assumption that time, budget, or energy won’t run out before the project is complete.
Documentation Will Match the System: The unrealistic belief that documentation will be perfectly in sync with the system at the time of delivery.
People Will Notice What’s Done Right: The expectation that users and stakeholders will recognize what has been done well, rather than focusing solely on deficiencies.
Premature Release Won’t Happen: The common situation where management forces an unfinished or hacked-together solution into production.
Management Will Understand: The assumption that management or stakeholders will fully understand the technical reasons why the software isn’t ready.
Murphy’s Law Is Just an Adage: Treating Murphy’s Law as a mere saying rather than a genuine mathematical reality: with many independent components each having a small failure probability p, the chance that at least one fails, 1 − (1 − p)^n, approaches certainty as n grows.
Dependencies Will Just Work: Underestimating the challenges posed by software dependencies, assuming that everything will work together seamlessly without conflicts.
Libraries Will Solve Everything: The belief that third-party libraries or frameworks will solve all problems without introducing new ones or creating additional complexity.
Scalability Will Handle Itself: The assumption that software designed for small-scale use will automatically scale to handle larger loads without significant rework.
Documentation Can Wait: The belief that documentation can be written after the code is complete without compromising its accuracy or usefulness.
Single Points of Failure Are Fine: Ignoring the risks associated with having single points of failure in the system, assuming they won’t be an issue until they become one.
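To ground the floating-point and rounding items above, here is a short demonstration of standard Python behavior (my example; any language with IEEE 754 floats shows the first two effects):

```python
# Binary floats cannot represent 0.1 exactly, so "obvious" equalities fail.
print(0.1 + 0.2 == 0.3)        # False
print(f"{0.1 + 0.2:.17f}")     # 0.30000000000000004

# Python's round() uses round-half-to-even ("banker's rounding"), not the
# round-half-up that many people (and some other platforms) expect.
print(round(0.5), round(1.5), round(2.5))   # 0 2 2
print(round(2.675, 2))                      # 2.67, since 2.675 is stored as 2.67499...

# For money, use decimal arithmetic with an explicit rounding rule.
from decimal import Decimal, ROUND_HALF_UP
print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68
```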