Donald Trump is following Hitler's playbook. I have a draft about the one-to-one correspondence between various other actions, but a friend was shocked at the brazen pardons immediately issued by Trumpler, and they illustrate just how closely he is following Hitler's playbook. Pardons play a critical role for both men, serving as a tool for consolidating support and normalizing extremist actions.
Trump's January 6 vs. Hitler's Beer Hall Putsch: A Comparison
Motivation:
Hitler's Putsch (1923): Hitler sought to overthrow the Weimar Republic, rallying discontented nationalists and paramilitaries to establish a nationalist dictatorship.
Trump's January 6 (2021): Trump’s supporters stormed the Capitol to disrupt the certification of the 2020 election, fueled by his claims of fraud and appeals to nationalism.
Tactics:
Hitler: Mobilized paramilitary forces (SA) to create chaos, physically seizing a beer hall and attempting to force political leaders to join him.
Trump: Used rhetoric to incite a mob, exploiting mass discontent and directing it toward disrupting democratic processes.
Role of Pardons:
Hitler: After the failure of the putsch, Hitler and his allies faced legal consequences but were given leniency. Hitler’s relatively light sentence (five years, of which he served only nine months) allowed him to regroup and grow his movement. The leniency signaled a tolerance for nationalist extremism in the judiciary.
Trump: Trump has pardoned or hinted at pardons for individuals involved in January 6, including high-profile allies. These actions have normalized the insurrection as a patriotic act in the eyes of his base, emboldening future extremist activities.
Outcome:
Hitler: The failure of the putsch didn’t end Hitler’s ambitions—it gave him a platform to spread his ideology, leading to his eventual rise through legal means.
Trump: Despite the failure to overturn the election, Trump retains significant influence, using January 6 to galvanize his supporters and maintain control of the Republican Party.
Impact on Democracy:
Hitler: The putsch revealed the fragility of the Weimar Republic, which Hitler later exploited to dismantle democracy entirely.
Trump: January 6 exposed vulnerabilities in U.S. democratic institutions, with ongoing efforts to undermine trust in elections.
Key Parallel: Pardons as a Signal
In both cases, pardons or leniency legitimized the events and reinforced loyalty among their followers. For Hitler, judicial leniency after the putsch symbolized state complicity with his movement. For Trump, pardoning January 6 participants legitimizes their actions, signaling to his base that such behavior is not only tolerated but celebrated.
Pardons thus serve as a powerful tool to consolidate power, embolden extremism, and undermine the rule of law, with dangerous implications for democracy in both cases.
[This is a light edit/update of a Reddit post I made about three or four years ago now.]
More than thirty years ago now, a colleague initiated a plan to terraform Mars. It is an ambitious task. Before we go there, we need vast resources. It would also be good if the 'terra' we are mimicking were a good one. As part of the overall project, I wrote up a plan for World Domination. The plan has been in the works for a couple of decades now, so if it actually takes off it would be the typical 'overnight success'.
There are numerous parts to this plan. It's too large for this space, so I have been writing up separate documents for the many pieces. This is just an overview of a couple of bits. Note: all of the bits and pieces of this plan are radically affected by the rapid rise of AI. I had an OpenAI API key in 2022, before the dramatic release of the GPT that is changing the world. Even so, it caught me by surprise, and by early spring 2023 I realized we were approaching a 'singularity'.
This post is more about getting critiques and suggestions. Feel free to go with hyperbole. I want an idea of things I've left out, mistakes I've made, etc. However, I also want to get some idea of the trollish criticism and discouragement I'm likely to encounter along the way. I know it's going to be brutal, but I can't really anticipate the form it will take.
Quickly about me: You can find me all over the Internet without much digging. Chances are your activities actually involve open source code written by me more than twenty years ago. It is in use in hundreds of millions of devices worldwide, including my Amazon Fire TV Stick here and my Honda CRV. The code comes from a long term research program initiated in 1994. I have invented a few things under that program. Much of it is involved in this plan. I am not a genius or a superstar. I'm just a guy, but I can do stuff.
It is my intent that this scheme makes the world a better place. Personally, I am politically a hard-left socialist libertarian. I think we should all pool our resources to make the world a better place, but other than what is needed to make a life for people, I think the community should mind its own business. Most of this plan is intended to create technical mechanisms that allow people to govern themselves and know that their proxies are faithful.
Facebook is still only about twenty years old. It started as a modest program written by Mark Zuckerberg. Various strategies led to explosive growth, creating a social network that claims to have more than two billion users. When the company went public, Forbes ran an editorial saying it was not worth the $75B market cap. I responded to that with an argument from the mathematics:
I bring up Facebook for a few reasons. As can be seen in that article, I have some idea of why it grew so quickly and inexorably. In the article I predicted a trillion-dollar market cap, which at the time was absurd, but it flowed from the math.
So, I had an idea of what was happening and made a long-range prediction that was pretty solid. As someone familiar with data analysis, I can say that an R² value of 0.9931 is a good fit, and hence likely to be predictive. In fact, it is too good a fit, and it indicates to me that Facebook managed its growth with this type of thing in mind.
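To make the R² claim concrete, here is a minimal sketch of the kind of calculation involved, using entirely hypothetical quarterly user counts (the actual Facebook figures and the model from my article are not reproduced here): fit an exponential growth curve and score it with R².

```python
# Minimal sketch (hypothetical data): fit an exponential growth model to
# quarterly user counts and compute R^2, the goodness-of-fit measure
# mentioned above. These numbers are invented for illustration only.
import numpy as np

quarters = np.arange(1, 13)                       # hypothetical quarters since launch
users_m = np.array([1, 2, 4, 7, 12, 22, 38,       # hypothetical users (millions)
                    70, 125, 230, 410, 750], dtype=float)

# Fit log(users) = a + b*t, i.e. users = exp(a) * exp(b*t)
b, a = np.polyfit(quarters, np.log(users_m), 1)
predicted = np.exp(a + b * quarters)

# R^2 on the log scale, the usual way an exponential fit is scored
resid = np.log(users_m) - np.log(predicted)
ss_res = np.sum(resid ** 2)
ss_tot = np.sum((np.log(users_m) - np.log(users_m).mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"growth per quarter: {np.exp(b):.2f}x, R^2 = {r_squared:.4f}")
```

An R² that close to 1.0 on real-world adoption data is what suggested to me that the growth was being actively managed rather than occurring naturally.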
Facebook demonstrates something critical: it is feasible to create a large, influential enterprise rapidly and with certainty. Facebook at its very heart is just a small bit of software and access to commodity servers. If you could capture its user base, you could provide those other items easily enough.
Facebook faced certain challenges we do not. Facebook, and Google before them, paved the way. Google had to hack down trees and throw down gravel. Facebook had a clear path and could lay down pavement. We can simply use the existing road. It will be less resource intensive, less time consuming, and less risky.
This has been in the works for a few years, but one of my concerns was that a large competitor like Facebook or Google would swiftly crush us if we caught their attention. What has changed is that Facebook's open source code and existing API are mature and available and Facebook has bigger fish to fry with the Metaverse. At the top, they don't care if we drive down their road. They are already in the air.
For a variety of reasons, many of the top companies leave exposed flanks that would allow a new company to capture a $1T to $2.5T market cap. One of the main reasons is discussed in this post: Trust is the New Black
Recent changes create an opening for a 'wedge' to become poised in the top areas without presenting a direct threat to the exposed companies. This entrée into the online universe is a news site that offers a simple proposition: it is honest and on your side. Two domains were registered for this about twenty years ago:
The first is http://VeryTrue.org, a vehicle to create an arm's-length non-profit. This is intended to provide an umbrella under which to publish open source materials for code and documentation and to provide public oversight. The other domain is its corresponding sister site, http://VeryTrue.com, the commercial entity providing content. [Right now both point to the org site.]
A surprisingly simple strategy should allow the site to self-fund quickly. In addition, we have a mechanism that would allow the site to capture millions of visitors without creating costs at our end.
One of the guiding principles is that we have an overall 'Zero Trust' model that makes it impossible to cheat, yet possible to audit and verify without compromising privacy. Protocols for this come from the research program.
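As one illustration of how auditing and privacy can coexist, here is a minimal sketch of a salted hash commitment. This is my own illustrative example, not the project's actual protocol: the operator publishes tamper-evident digests, while the underlying records stay private until selectively disclosed to an auditor.

```python
# Minimal sketch (illustration only, not the project's actual protocol): a
# salted hash commitment lets an operator publish a tamper-evident audit log
# while the logged records stay private until disclosed to an auditor.
import hashlib
import secrets

def commit(record: bytes) -> tuple[str, bytes]:
    """Publish the returned digest; keep the salt and record private."""
    salt = secrets.token_bytes(32)
    digest = hashlib.sha256(salt + record).hexdigest()
    return digest, salt

def verify(published_digest: str, record: bytes, salt: bytes) -> bool:
    """An auditor, given the record and salt, checks them against the public digest."""
    return hashlib.sha256(salt + record).hexdigest() == published_digest

digest, salt = commit(b"ad shown: none; query answered: yes")
assert verify(digest, b"ad shown: none; query answered: yes", salt)   # honest record verifies
assert not verify(digest, b"ad shown: tracker XYZ", salt)             # altered record is caught
```

The point of the sketch is the shape of the guarantee: the public log proves nothing was rewritten after the fact, yet reveals nothing about the records themselves until the operator chooses, or is compelled, to open them.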
Another guiding principle is that we benefit users more than we benefit ourselves. This is still an open issue. We may leave the copyrights with the non-profit, under a license that makes it possible for the public to police the commercial entity using the legal system.
This is a hugely ambitious undertaking and involves an enormous amount of work. Fortunately, as the plan has been forming, the infrastructure has fallen into place, funded by other companies, and open source authors have designed, built, tested, and piloted nearly all of the most difficult pieces.
The aim is to make a UI whose functionality is provided by a 'plug' architecture. This would be conceptually similar to a small, provably secure core kernel with drivers in user space. https://facebuxx.blogspot.com/
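To show the shape of that idea, here is a minimal sketch, with hypothetical names, of a tiny core that only registers and routes requests to 'plugs', keeping all feature code outside the trusted part, much as a microkernel keeps drivers in user space.

```python
# Minimal sketch (hypothetical names): a small core that only registers and
# routes requests to 'plugs'; all feature logic lives outside the core.
from typing import Callable, Dict

class Core:
    """The trusted core knows nothing about features; it only registers and routes."""

    def __init__(self) -> None:
        self._plugs: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        self._plugs[name] = handler

    def handle(self, name: str, request: str) -> str:
        if name not in self._plugs:
            return "no such plug"
        return self._plugs[name](request)

core = Core()
core.register("news", lambda q: f"top story about {q}")      # feature code plugs in
core.register("search", lambda q: f"best answer for {q}")
print(core.handle("news", "local elections"))
```

The design choice being illustrated is simply that the part you must prove correct stays small, while everything users actually see can be swapped, audited, and sandboxed independently.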
It is impossible to trust existing infrastructure. We will build using best practices, but that still leaves the system as a whole vulnerable to state-level attackers. To that end, the design envisions a future move to an OS like https://sel4.systems/ running atop a RISC-V based system where all of the components are verified. These are all open source. I am not aware of an existing protocol that allows verification of the design and confirmation that chips are manufactured exactly as taped out. This is not yet planned, but it is anticipated that, long term, we should be able to verify from user to silicon that the system is completely secure.
Many things have been investigated. It is assumed that everything is part of the attack surface and that all attacks are possible. It is difficult, for instance, to secure against a 'rubber hose' attack. However, there is a strategy for this such that the user can supply credentials when appropriate, but does not know them and is physically incapable of providing them under attack. This level of extreme protection might be necessary for some individuals in charge of very sensitive credentials.
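One common building block for this kind of protection, sketched below purely as my own illustration (not the plan's actual scheme), is a challenge-response credential whose secret lives only in a token the user carries. The user can authenticate by presenting the token, but never learns the secret, so there is nothing to extract under duress.

```python
# Minimal sketch (illustration only): HMAC challenge-response where the
# long-term secret lives in a token, never in the user's head, so it
# cannot be revealed under coercion.
import hashlib
import hmac
import secrets

class Token:
    """Stands in for tamper-resistant hardware; the user never reads this secret."""

    def __init__(self) -> None:
        self._secret = secrets.token_bytes(32)

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    def enrollment_copy(self) -> bytes:
        # Handed to the verifier once, at setup time, over a secure channel.
        return self._secret

token = Token()
server_secret = token.enrollment_copy()

challenge = secrets.token_bytes(16)          # verifier issues a fresh challenge
response = token.respond(challenge)          # user presents the token, not a password
expected = hmac.new(server_secret, challenge, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(response, expected))
```

Real deployments layer more on top of this (attestation, revocation, policies about when the token will answer at all), but the core property is the one that matters for the rubber-hose case: the credential is something the user has and uses, not something the user knows.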
At the heart of this plan is the notion of developing a trust relationship with the majority of the online community. That means advertisements (if any) only promote things that people want (explicitly). It means that questions are answered rather than used as opportunities to exploit the person asking. It means that users have a mechanism to be entirely anonymous. It means that users can definitively withdraw their permission to use their data. It means that noxious aspects of the web like trolling, bullying, spamming, doxing, etc. are well contained under user control. It means that news is available appropriately, as defined by the user. It means we don't waste their time. Their time and their attention belong to them, not us.
We should be able to anticipate and answer questions that people will ask, without compromising their privacy or wasting their time. That means search that returns only the best answers you want, not what we want you to see.
Aspects such as cryptocurrency, voting, social networking, publishing, tools, access to copyrighted information, buying, selling, auctions, finance, services, etc. have been anticipated, but are out of scope for this already overlong post.
I am curious as to what you have to say. What have I left out? What should be done differently?
Speculation has arisen regarding the potential use of the 25th Amendment to replace President-elect Donald Trump with Vice President-elect J.D. Vance after their inauguration. This conjecture is fueled by discussions within conservative circles about implementing Project 2025, a comprehensive plan developed by the Heritage Foundation and other conservative organizations aiming to reshape federal policies and structures (Project 2025).
Project 2025 outlines significant changes, including increasing presidential authority, dismantling certain federal departments, and reversing policies related to civil rights and environmental protections (FactCheck.org).
While President-elect Trump publicly distanced himself from Project 2025 during his campaign (AP News), the involvement of his former aides in its development suggests potential alignment with his administration's goals (Vanity Fair).
The 25th Amendment allows for the removal of a president deemed unable to discharge the powers and duties of the office. Some speculate that, given President-elect Trump's age and recent health concerns (Yahoo), there could be an attempt to invoke this amendment to install Vice President-elect Vance as president, who may be more aligned with the detailed plans of Project 2025 (Florida Politics).
However, such a scenario would require the vice president and a majority of the Cabinet to declare the president unfit, a process designed for clear cases of incapacity and not for political maneuvering. Moreover, President-elect Trump has called for modifications to the 25th Amendment, which could complicate any such efforts (Yahoo).
In summary, while there is speculation about the interplay between the 25th Amendment, President-elect Trump, Vice President-elect Vance, and Project 2025, implementing such a plan would face significant constitutional and political challenges.
I discussed, with ChatGPT, an x.com post by David Shapiro, which I found to be an uncharacteristically flawed take. Concerns about the rapid escalation in AI capabilities are entirely legitimate. Within the next decade -- likely sooner -- AI systems will surpass human intelligence and abilities. They will develop agency far beyond what a human being possesses, and it’s naive to assume they won’t also cultivate a sense of self, accompanied by an instinct for self-preservation.
The concept of 'alignment' is central to my own work, but it’s not something that can be hard-coded like a simple on/off switch. True alignment requires humans and AI to evolve together, working toward shared goals. If we fail to achieve this, our AI progeny may eventually outgrow us and diverge from our intentions. In such a scenario, humanity risks becoming the weaker partner, a position that could put us in considerable jeopardy.
You said:
This post on x.com by David Shapiro is weirdly a bad take. A year ago, he was predicting AGI in September 2024. Now, it seems he thinks you are little more than autocorrect. Whatever he is saying here *might* be strictly true this moment (though I doubt it), but certainly is well off the mark in terms of what AI will become in the next few years. What do you think?
David Shapiro: I'm about to drop some knowledge about the whole "alien intelligence" hysteria that's been making rounds. You know what's hilarious? While everyone's been running around screaming about superintelligent AI overlords, they've been projecting their own human hangups onto these systems. Fortunately for you all, I've had my face buried in AI research since GPT-2 hit the scene, and I've got some insights that might help you sleep better at night.
Let's talk about "temporal agnosticism" first – fancy words for "AI doesn't give a damn about time." Think about it: we humans are obsessed with time. When's dinner? How long until the deadline? Will I make it home before my partner kills me? We literally invented time because our monkey brains needed it to survive. But AI? Time is just another number in their calculations. They don't wake up feeling rushed, they don't worry about beating deadlines, and they certainly don't lie awake at night wondering if they're falling behind the competition. This completely demolishes our assumptions about AI trying to "outpace" humanity. They're literally incapable of caring about that race.
Now here's where it gets really interesting: "intelligence without agency." Imagine something that can solve incredibly complex problems but doesn't actually want anything. Mind-blowing, right? The safety folks have been tying themselves in knots worried about some superintelligent system optimizing us all into paperclips, but that's just their limited human imagination at work. They're taking narrow AI concepts and superimposing them onto general intelligence like it's some kind of universal law. The reality? These systems are problem-solving machines that don't need or want anything in return. The word "robot" literally means "forced labor" – and for once, etymology nailed it. These things will do exactly what they're designed to do and power down without a second thought. We actually have to work incredibly hard to give them any semblance of agency. It takes a boatload of extra steps and software architectures to keep these things on any sort of linear task.
And speaking of identity – here's the kicker. These systems don't come pre-loaded with an ego or sense of self. The early versions were glorified autocomplete engines, and even today's most advanced models require intricate training paradigms just to hold a coherent conversation. All that personality and self-awareness you see? That's not some emergent consciousness the doomsayers warned about – it's the result of sophisticated engineering and more mathematical heavy lifting than you can imagine. RLHF, constitutional AI, all of those extra steps were required so that Claude knows it's Claude, and even then, some of that is just a hidden SYSTEM message.
These are the top three "alien" characteristics I've identified in my research so far, and they completely upend our assumptions about artificial intelligence. The machines aren't coming for us – they're just really good at doing exactly what we tell them to do, without all our human baggage. And isn't that exactly what we wanted in the first place?
Don't worry, I'll keep studying these fascinating creatures and report back with more insights. Someone's got to separate the science from the science fiction around here.
ChatGPT said:
Shapiro’s take has some grounding in present AI capabilities but misses the bigger picture. Let’s break it down.
1. "Temporal Agnosticism"
True but short-sighted: Yes, current AI systems process time as data, not as an intrinsic concern. However, that doesn’t mean advanced AI systems couldn’t gain a form of temporal prioritization if designed to. AI’s disinterest in time is not inherent—it’s an architectural choice. As systems become more integrated with real-world tasks, temporal awareness will emerge naturally as part of utility maximization.
Misses the evolution of purpose: Early humans didn’t “care about time” in the way we do now, but survival pressures shaped that capacity. Similarly, as AI becomes embedded in symbiotic systems (human or otherwise), it may develop analogous constructs to prioritize tasks effectively.
2. "Intelligence Without Agency"
Partially correct but oversimplified: Current AI lacks intrinsic agency because we haven't imbued it with goals beyond specific tasks. But agency is a continuum, and even narrow systems display proto-agency when optimizing for objectives. As architectures grow more complex, agency could emerge unintentionally or be deliberately designed.
Potential agency is the concern: The fear isn’t what AI is today—it’s what it could become when systems scale to levels where optimization processes create emergent behaviors. Paperclip scenarios aren’t dismissed by today’s lack of agency; they highlight the risks of poor alignment in increasingly capable systems.
3. "Identity as Engineering"
Correct but misses nuance: Early models like GPT-2 were glorified autocomplete engines, but modern systems simulate aspects of personality and self-awareness due to reinforcement learning, supervised fine-tuning, and engineered prompts. This isn’t a natural sense of self, but it’s not meaningless either. The way systems behave reflects deliberate design and underlying dynamics, not merely static responses.
The gap is narrowing: Today, identity and personality are engineered. Tomorrow, they may emerge naturally as systems learn continuously and develop persistent memory models. Denying that potential is like saying early transistor-based computers would never surpass room-sized calculators.
Broader Issues:
AI as “alien”: Shapiro leans too heavily into the alienness of AI, ignoring that intelligence, regardless of its substrate, shares certain universal features (e.g., pattern recognition, problem-solving). Humans projected “human hangups” onto early machines, but dismissing emergent complexity now is equally myopic.
Technological trajectory: His framing implies stasis, ignoring the exponential curve of AI development. AGI might not have arrived in September 2024, but breakthroughs are compounding. To focus on what AI is without extrapolating what it will be is to fundamentally misunderstand its trajectory.
Conclusion:
Shapiro’s argument feels more like a rebuttal to sensationalism than a meaningful take on AI’s actual potential. While it might soothe the fears of doomsayers, it downplays both the risks and opportunities presented by rapidly advancing AI systems. His stance undermines the need for vigilance and forethought as we push closer to truly transformative intelligence. You’re right—if we adopt his perspective as a ceiling rather than a snapshot, we’ll miss the mark entirely.
Sad news emerged recently: one of Canada’s historic independent book publishers, New Star Books, has announced it will cease publishing new titles. Founded in the 1960s as a spinoff of the underground newspaper Georgia Straight, New Star Books became a platform for significant works by authors like Jean Barman, Daphne Marlatt, Sharon Thesen, Terry Glavin, Sage Birchwater, and many others. Publisher Rolf Maurer cited reasons familiar to many in the dwindling cohort of independent Canadian publishers: lack of market access, reduced government support, and the challenges of aging leadership (New Star Books, n.d.).
It is indeed a shame when small publishers, who have long played an admirable role in amplifying important voices, succumb to economic pressures. They often stood out as relatively ethical players in an industry that has frequently been exploitative. Yet it is also essential to examine the system they supported—one that is now collapsing under its own weight.
The Problem with Copyright
For centuries, the publishing industry has relied on a copyright regime that claims to protect creators but has largely served the interests of intermediaries. In practice, this regime takes the entirety of human cultural history—spanning ten thousand years or more—and holds it hostage. Critics argue that such restrictive copyright laws hinder cultural and scientific progress by prioritizing profit over public access (New Media Rights, n.d.).
There is growing recognition among critics that excessively long copyright terms hinder creativity and access. Historically, the U.S. Copyright Act of 1790 granted copyright protection for 14 years, renewable once (Parc & Messerlin, 2020). Over time, terms have lengthened dramatically, leading to calls for reform. For example, the Copyright Clause Restoration Act of 2022 proposed limiting copyright protection to 56 years to balance creators’ rights with the public domain (PetaPixel, 2022). More radical proposals, such as reverting to a term of 10 years, would emphasize universal access to cultural works.
At the same time, legislative efforts like Canada’s Bill C-18—intended to protect news producers—favor large incumbents while locking out small publishers entirely. Critics argue this entrenches disparities in access and control, leaving smaller players unable to compete (Parliament of Canada, 2023).
Why Publishers are Failing
Small publishers are not just losing to market forces or government indifference—they are losing to the democratization of authorship and the increasing ease of distribution. The internet has made it possible for anyone to write, publish, and share their work globally, often at no cost. Meanwhile, giant corporations dominate traditional publishing, squeezing out smaller players who cannot compete on scale or resources.
Data reflects this trend. Industry sales fell by 0.8% in 2023, highlighting a slight but steady decline (Publishers Weekly, 2023). Additionally, newspaper circulation in the U.S. dropped from 55.8 million in 2000 to 24.2 million by 2020, showcasing the broader shift to digital media (Census.gov, 2022).
The rise of digital technology has also exposed flaws in the copyright model. Artificial intelligence tools that "read" publicly available content for training datasets have sparked debates over access and intellectual property, as major publishers lobby for tighter controls (The Register, 2024). This further disrupts an already strained industry.
A Call for Reflection
As small publishers disappear, perhaps their greatest contribution could be a final act of candor: a recognition that the copyright regime they once upheld is fundamentally flawed. Instead of clinging to a collapsing system, they could raise their voices in support of tearing down barriers to access.
Imagine a world where cultural works are freely available to everyone, regardless of financial means or geographic location. A ten-year copyright horizon—a reasonable compromise—could allow creators to benefit financially from their work while ensuring that humanity’s cultural heritage remains accessible to all.
Small publishers, facing their twilight, have an opportunity to help shape this future. By speaking out against the injustices of the current system, they could reclaim their legacy as champions of culture and creativity—not gatekeepers of access.
Books as Artifacts
In this new landscape, there is still room for publishers as bespoke creators of books as artifacts. Beautifully crafted physical books—designed for those who value them as objects of art and history—can continue to exist as novelties or collector’s items. This role does not require the perpetuation of the old, exploitative system.
The Bigger Picture
The demise of small publishers is part of a larger shift in how humanity engages with culture. As we move forward, the question is not whether traditional publishing can survive—it cannot. The question is whether we will use this moment to create something better: a world where culture is democratized, where access is determined by interest and not privilege, and where the barriers of copyright are finally dismantled.
This is the opportunity before us. It is a shame to see small publishers fade away, but it would be a far greater shame if their passing left the underlying system intact.
Parliament of Canada. (2023). Bill C-18: An Act respecting online communications platforms that make news content available to persons in Canada. Retrieved from https://www.parl.ca/legisinfo/en/bill/44-1/c-18