Sunday, May 26, 2024

Can AI replace human software developers?

Yes. Not today, but soon. I have been a software developer longer than most people discussing this have been alive. Most developers are weak enough that much of their work will be done better by AI automation.

One of the things holding AI back in development is that the human-created code it has trained on is terrible. These systems have learned to reproduce sloppy, bug-inviting code because that is mostly what is available out there. That will change.

Lots of jobs exist because they are required to support fragile, poorly written systems. As those get replaced, the jobs surrounding them will disappear. That’s good news, though. Maintaining and patching together buggy legacy code is a horrible job I would not wish on anybody.

Depending on the environment, I would say anywhere from 10% to 50% or more of developers will be replaceable in about five years, probably sooner. Maybe the top 10% of developers will still be needed because they have a high level of knowledge and skill, and they are not ‘one-trick ponies’. They are very smart, highly literate, adaptable, and good to work with.

You can’t judge ‘replaceability’ on the basis of the really capable developers. Some may take quite a long time to replace because they embody a great deal of intelligence and knowledge that only exists with a small number of hard-core developers. However, ordinary folk, even if they don’t lose their employment, will find lots of their tasks taken over by automation.

Consider this: if an overall AI development system finds a fixable issue in a codebase, it can survey everything else to find and fix that issue everywhere. If it finds a 'best way' to do something, it can make that assessment and quickly convert everything over. If, for some reason, one of its changes needs to be rolled back, it will roll it back, because failing to account for that is the kind of mistake it does not make. Many of the bugs developers spend time fussing with stem from mistakes by the original developers that an AI would find easy to identify and correct.

Much of the needed code already exists. Even a top developer is not guaranteed to look for and find acceptable existing code for the things they need, so they end up writing it themselves. An AI system in the next few years will be aware of pretty much every bit of code available, what it does, and how to use it. Using it often depends upon setting up an elaborate environment. I have been amazed at how well ChatGPT can already make sense of what I want and how to do it, and when my environment has an issue, it can tell me more about the issue and how to correct it. That includes the kind of fussy stuff that can send even a seasoned programmer down rabbit holes.

AI will soon be able to do a lot of the routine work developers do; to some extent, it already can. Going forward, much of the code that human developers might write will already be written, known, and available, so the work of developing it will not be needed.

For most of us, developers included, the net impact of AI automation will be positive. It will do the tasks we don’t wish or need to do. It will increase our collective net wealth, and at the same time, it will increase our leisure time to take advantage of that wealth. To the extent that humans will be needed in the mix, their powers will be greatly amplified using AI to leverage what they bring to the table.

I must say that this is now coming on fast, and I highly recommend that people begin to become familiar with and skill up on AI systems. ChatGPT, the thing that really opened this all up, is still a great place to start.

Fake Fossils Can't Topple Evolution

Below is an edited copy of a post I made on Quora years ago.

Has the Existence of Fake Fossils Led Us to Many Wrong Conclusions in the Theory of Evolution?

By Bob Trower

The question of whether fake fossils have significantly misled our understanding of evolutionary theory is a complex one, but the answer is essentially "no." To fully appreciate this, it's important to understand both the robustness of evolutionary theory and the motivations behind the attacks on it.

Understanding the Theory of Evolution

The Theory of Evolution, often described as "the most thoroughly authenticated fact in the whole history of science" by anthropologist Ashley Montagu (Montagu, 1984), is a comprehensive and well-supported framework for understanding the diversity of life on Earth. It is not just a single idea but a collection of interconnected concepts supported by an immense volume of evidence from various scientific disciplines, including genetics, paleontology, comparative anatomy, and molecular biology.

Evolutionary theory explains how species adapt and change over time through mechanisms such as natural selection, genetic drift, and gene flow. The fossil record, while a critical component, is just one piece of the puzzle. Other lines of evidence, such as DNA analysis and observed evolutionary changes in living organisms, provide robust support for the theory (Futuyma, 2013).

The Creationist Challenge

Despite the overwhelming evidence, evolutionary theory has faced persistent opposition, particularly from creationist groups. Creationists generally hold a set of beliefs that include the literal interpretation of the Bible, the idea that there is only one absolute truth, and the conviction that evolution is in direct conflict with their religious texts (Scott, 2004). As a result, they feel a duty to proselytize against evolution.

The tactics employed by creationists often involve cherry-picking anomalies or presenting fraudulent evidence to cast doubt on evolution. This strategy is less about seeking scientific truth and more about undermining a theory that contradicts their worldview (Numbers, 2006).

The Impact of Fake Fossils

Fake fossils, while they can cause temporary confusion, do not have the power to derail the entire theory of evolution. Here’s why:

  1. Scientific Scrutiny: The scientific community operates on a basis of rigorous peer review and replication. Any new fossil discovery undergoes extensive analysis and verification by multiple experts. Fake fossils are usually identified and discredited through this process (Prothero, 2007).
  2. Multiple Lines of Evidence: Evolutionary theory is supported by a multitude of independent lines of evidence. Even if a fake fossil were initially accepted, it would eventually be exposed when it failed to align with other evidence from genetics, biogeography, or comparative anatomy (Coyne, 2009).
  3. Predictive Power: One of the strengths of evolutionary theory is its predictive power. It allows scientists to make accurate predictions about the relationships between species, the progression of embryonic development, and the location of future fossil discoveries (Carroll, 2006). A fake fossil would fail to align with these predictions over time.
  4. Self-Correcting Nature of Science: Science is inherently self-correcting. Mistakes, whether intentional (as in the case of fake fossils) or unintentional, are identified and corrected through ongoing research and debate. This iterative process strengthens the reliability of scientific knowledge (Kuhn, 1970).

The Hypothetical Scenario

Even if we were to entertain a hypothetical scenario where the Bible's creation story is accurate, with the Earth being less than ten thousand years old and all species created immutably, the utility of the Theory of Evolution remains undiminished. It would still:

  • Organize and make sense of the living world.
  • Allow predictions about species similarities and relatedness.
  • Aid in determining effective medical treatments across species.
  • Provide insights into embryological development.
  • Guide the search for fossils.
  • Explain and predict animal behavior (Mayr, 2001).

Conclusion

In conclusion, while fake fossils can create temporary confusion and fuel creationist rhetoric, they do not undermine the foundation of evolutionary theory. The theory's strength lies in its extensive and diverse body of supporting evidence, its predictive power, and the self-correcting nature of science. The rigorous scrutiny applied to fossil discoveries ensures that fraudulent evidence is eventually uncovered and discarded. Thus, evolutionary theory remains a robust and indispensable framework for understanding the biological world.

References

Carroll, S. B. (2006). The Making of the Fittest: DNA and the Ultimate Forensic Record of Evolution. W. W. Norton & Company.

Coyne, J. A. (2009). Why Evolution Is True. Viking.

Futuyma, D. J. (2013). Evolution. Sinauer Associates.

Kuhn, T. S. (1970). The Structure of Scientific Revolutions (2nd ed.). University of Chicago Press.

Mayr, E. (2001). What Evolution Is. Basic Books.

Montagu, A. (1984). Science and Creationism. Oxford University Press.

Numbers, R. L. (2006). The Creationists: From Scientific Creationism to Intelligent Design. Harvard University Press.

Prothero, D. R. (2007). Evolution: What the Fossils Say and Why It Matters. Columbia University Press.

Scott, E. C. (2004). Evolution vs. Creationism: An Introduction. Greenwood Press.

Wednesday, May 22, 2024

AI Alignment and Security Now!

As far as I know, the people charged with 'Alignment' at AI companies are not convinced things are safe enough, and some have quit because of it.** A recent post about an AI company's safety and alignment moved me to drill down a bit to see how things are going. It's worrisome. Things are not going well. It seems as if companies are saying "Trust us, we are working hard to detect any oil spill and clean it up when it happens," rather than "We are working hard to ensure an oil spill *cannot happen*."

The Imperative of Robust AI Security and Alignment


In a recent comment to an AI executive, I raised concerns about the apparent lack of sophisticated security measures within AI companies. This is particularly troubling given the potential risks associated with advanced AI systems. Here, I would like to expand on these concerns and suggest mechanisms to ensure AI safety and alignment.

The Uncertainty of Controlling Out-of-Control AI Systems

One of the gravest challenges we face is the uncertainty surrounding our ability to stop an out-of-control AI system. As AI technology advances, the risk of developing systems that surpass human intelligence becomes more palpable. While many companies focus on detecting and mitigating the emergence of superhuman AGI, the truth is we may already be dealing with AI systems that exhibit superhuman capabilities in specific domains.

Mechanisms to Induce Responsibility in AI Firms

To address this challenge, we must consider mechanisms that compel AI firms to prioritize safety and alignment. One of the most viable approaches is to hold these firms liable for any damage their AI systems cause. Specifically, they should be held accountable for damage resulting from grossly negligent or sloppy control practices. This legal liability would incentivize firms to adopt rigorous safety and alignment measures, preventing potential catastrophes.

Key Security and Alignment Measures

  1. Deadman Switches: These are automatic fail-safes designed to disable an AI system in the event of a breakdown in oversight and control. A deadman switch ensures that if human operators lose the ability to manage the AI, the system will automatically shut down or enter a safe mode, preventing unintended actions.

  2. Separated PKI: Public Key Infrastructure (PKI) is essential for securing communications and verifying identities within an AI system. A more sophisticated PKI setup involves an 'm of n' key scheme, where multiple keys are required to perform critical operations. This system should include a separate root key and certificate authority, a fiduciary responsible for verifying data, and a separate verification certificate issuer. This separation of duties enhances security by preventing any single point of failure.

  3. Siloing: AI systems should be designed with siloing in mind, where different components of the system operate independently and do not share sensitive information unless absolutely necessary. This reduces the risk of a single vulnerability compromising the entire system. Each silo can be monitored and controlled independently, ensuring that any malfunction or security breach can be contained.

  4. Human Rights Rationale: AI systems must be programmed with a clear rationale for prioritizing human rights, especially when conflicts arise between AI actions and human wishes. For example, if an AI system's operation conflicts with human autonomy or privacy, the system should default to preserving human rights. This principle ensures that AI development aligns with ethical standards and societal values.
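The deadman switch idea in item 1 can be sketched in a few lines. This is only a toy illustration, assuming oversight is signalled by a periodic human heartbeat; the class and method names here are mine, not from any real AI system:

```python
import time

class DeadmanSwitch:
    """Toy deadman switch: if human operators stop sending
    heartbeats, the system drops into a safe mode before
    taking any further critical action."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_heartbeat = time.monotonic()
        self.safe_mode = False

    def heartbeat(self):
        # Called periodically by a human oversight process.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Called before every critical action the AI system takes.
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.safe_mode = True  # oversight lost: go to safe mode
        return not self.safe_mode

switch = DeadmanSwitch(timeout_seconds=0.05)
assert switch.check()       # oversight is fresh: actions allowed
time.sleep(0.1)             # heartbeats stop...
assert not switch.check()   # ...so the switch trips into safe mode
```

The point of the pattern is that safety is the default: the system must continuously *earn* permission to act, rather than requiring someone to notice a problem and intervene.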

A Balanced Approach to AI Development

The rapid pace of AI development demands a balanced approach, where innovation is not stifled but is conducted within a framework of rigorous safety and alignment protocols. Independent, well-funded AI alignment teams should be established, with the authority to enforce security measures and escalate issues as necessary. This approach will help prevent potential disasters before they occur, rather than attempting to mitigate damage after the fact.

In conclusion, the potential benefits of AI are immense, but so are the risks. By implementing robust security measures and holding AI firms accountable for their systems' impacts, we can ensure that AI development proceeds safely and ethically. The stakes are too high for anything less.

**https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

Tuesday, May 21, 2024

Don't get fished in...

Consider...

The scenery on the information highway is being choked out by billboards. Consider: if the entity behind that click starts out by lying to you, what are the odds they will do right by you? Advertisers do everything they can to get your attention. Recognize the signs of manipulative hooks and avoid them.

Choose Your Bait

Actually, you probably will believe it. It's not that surprising.
If by "change your life" you mean "waste your time," then sure.
The only shock here is that people still fall for this.
Speechless? More like slightly unimpressed.
These "secrets" are common knowledge. Nice try.
Doctors are actually quite calm and not at all stunned.
Mind still intact. Not blown at all.
Believable. Very much so.
What happened? Not much, really.
Feel free to miss out. It's not that incredible.

AI-Based Analysis to Find What We Missed

Introduction

AI advances have brought us to a time when vast amounts of data can be examined by AI systems to 'find what we missed'. I have a feeling this is already being done in areas like medical X-rays, sports data, astronomy, and materials science. If it is not being done, it should be. Below is a concrete example of this principle: an experimental protocol for the analysis of cloud chamber data to find new particles or new principles of particle behavior, and to identify experimental areas that have been missed.

Program Structure for AI-Based Analysis of Cloud Chamber Traces

The advent of artificial intelligence (AI) has revolutionized numerous scientific fields, and particle physics is no exception. One of the most promising applications of AI in this domain is the analysis of cloud chamber traces. Cloud chambers, which detect ionizing particles by the trails they leave behind, have been instrumental in many pivotal discoveries in physics. However, the manual identification and classification of these traces are labor-intensive and prone to human error. By leveraging AI, we can automate this process, significantly enhancing efficiency and accuracy.

This protocol outlines a comprehensive approach to utilizing AI for the identification and analysis of cloud chamber traces. By assembling a vast dataset of cloud chamber images, tagging them with relevant metadata, and employing advanced AI techniques, we aim to not only streamline the identification of known particles but also uncover anomalous traces that could point to new physical phenomena. The ultimate goal is to create a self-improving system that continuously refines its capabilities, bridging the gap between AI and human expertise to drive forward our understanding of the particle world.

1. Data Collection and Assembly

  • Historical Data: Gather and digitize years of cloud chamber images from various experiments and sources.
  • Experimental Parameters: Ensure each image is tagged with detailed metadata, including experimental conditions, particle types expected, energy levels, and other relevant parameters.

2. Image Preprocessing

  • Normalization: Standardize images to a common format and resolution.
  • Noise Reduction: Apply filters to reduce noise and enhance the clarity of the traces.
  • Segmentation: Use AI techniques to segment the images into individual traces for easier analysis.
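The normalization and noise-reduction steps above can be sketched simply, assuming images arrive as 2-D grids of raw intensity values. The function names are illustrative; real pipelines would use an image library, but the idea is the same:

```python
def normalize(image, max_value=255.0):
    """Scale raw pixel intensities into the [0, 1] range."""
    return [[pixel / max_value for pixel in row] for row in image]

def median_filter(image):
    """Simple 3x3 median filter for noise reduction.
    Border pixels are left unchanged for brevity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                image[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            )
            out[y][x] = window[4]  # median of the 9 values
    return out

# A 3x3 patch with one hot (noisy) pixel in the centre.
raw = [[10, 10, 10],
       [10, 250, 10],
       [10, 10, 10]]
clean = median_filter(raw)
# The centre pixel is replaced by the neighbourhood median (10).
```

Median filtering is a good fit for cloud chamber images because it removes isolated speckle noise while preserving the sharp edges of genuine particle trails.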

3. AI Training and Tagging

  • Initial Training: Train convolutional neural networks (CNNs) and other machine learning models on labeled datasets to recognize and classify known particle traces.
  • Automated Tagging: Implement the trained models to automatically tag and identify traces in the assembled dataset.
  • Iterative Improvement: Continuously refine the models with new data and feedback to improve accuracy.

4. Anomaly Detection

  • Trace Removal: After positively identifying known traces, remove them from the images, leaving only unidentified traces.
  • Anomaly Identification: Use anomaly detection algorithms to flag traces that do not match known patterns.
  • Clustering: Apply clustering techniques to group similar unidentified traces together.
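A toy sketch of the anomaly-flagging step, assuming each residual trace has already been reduced to a single numeric feature such as curvature. The z-score threshold and names are illustrative; real systems would use richer feature vectors and learned models:

```python
import math

def flag_anomalies(features, z_threshold=3.0):
    """Flag feature values more than z_threshold standard
    deviations from the mean as candidate anomalies."""
    n = len(features)
    mean = sum(features) / n
    variance = sum((f - mean) ** 2 for f in features) / n
    std = math.sqrt(variance)
    if std == 0:
        return []
    return [i for i, f in enumerate(features)
            if abs(f - mean) / std > z_threshold]

# Curvature-like feature for 100 ordinary traces plus one outlier.
curvatures = [1.0] * 50 + [1.1] * 50 + [9.0]
anomalies = flag_anomalies(curvatures)
# Only the last trace (index 100) is flagged for human review.
```

The same flagged indices would then feed the clustering step, so that recurring anomalies of the same shape are grouped and surfaced together rather than reviewed one at a time.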

5. Characterization and Analysis

  • Pattern Recognition: Use AI to recognize patterns within the unidentified traces and categorize them based on similarities.
  • Hypothesis Generation: Allow the AI to generate hypotheses about the nature of the anomalous traces based on experimental parameters and known physics.
  • Human Review: Present the most intriguing or consistent anomalies to human experts for further investigation and interpretation.

6. Experiment Planning and Execution

  • Gap Analysis: Identify gaps in the dataset where certain experimental conditions are underrepresented or missing.
  • Experiment Design: Design new experiments to fill these gaps, informed by the patterns and anomalies identified by the AI.
  • Feedback Loop: Use the results from these new experiments to further train and refine the AI models.
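The gap-analysis step amounts to counting coverage over the space of experimental parameters. A minimal sketch, assuming each run is tagged with metadata as described in step 1 (the field names here are hypothetical):

```python
from collections import Counter

def find_gaps(experiments, energies, fields):
    """Count how many runs cover each (energy, field) cell and
    report the combinations with no coverage at all."""
    coverage = Counter((e["energy"], e["field"]) for e in experiments)
    return [(en, f) for en in energies for f in fields
            if coverage[(en, f)] == 0]

runs = [{"energy": "low", "field": "on"},
        {"energy": "low", "field": "off"},
        {"energy": "high", "field": "on"}]
gaps = find_gaps(runs, ["low", "high"], ["on", "off"])
# The ("high", "off") cell has never been run: a candidate experiment.
```

Each uncovered cell is a concrete, machine-generated suggestion for the experiment-design step that follows.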

Benefits and Potential Outcomes

  • Discoveries: Potential identification of new particles or unknown physical phenomena.
  • Efficiency: Significantly reduce the time and effort required for manual trace identification.
  • Comprehensive Understanding: Gain a more detailed and comprehensive understanding of particle interactions and behaviors.
  • Continuous Improvement: Create a self-improving system where AI and human expertise continuously enhance each other.

Challenges and Considerations

  • Data Quality: Ensuring the quality and consistency of the historical and newly collected data is critical.
  • Model Accuracy: Continuously validating and improving the accuracy of the AI models to prevent false positives/negatives.
  • Interdisciplinary Collaboration: Close collaboration between AI experts and particle physicists is essential for interpreting results and guiding further research.

By implementing such a program, the integration of AI into the analysis of cloud chamber traces could lead to significant advancements in our understanding of particle physics and potentially uncover new and unexpected phenomena.
