123456
-
Lol
-
Since modern farming is largely understood and perfected, couldn’t robots conceivably take over the role of the human farmer? It’s not exactly rocket science. Couldn’t we really run farms with minimal human oversight, and that oversight being largely relegated to machine care and maintenance?
No, farming isn’t exactly rocket science. It is much, much more complicated.
I know - I studied rocket science back in college. It was fun - the higher math was not that hard. Once you have learned it, you just solve the exact same sort of problems, over and over again. I aced it.
Farming on the other hand, well, every day is different. There is always new stuff to learn, and lots of experimentation too, year on year. People are trying to automate, and are adapting new technologies faster than ever before. But this approach just makes more of the world’s billions of souls ever more dependent upon technology that first, cannot be sustained, and second, even if it could be has serious vulnerability inherent in those systems. Either the systems will crash, or be hacked, or sabotaged, or fail autonomously, or a fatal flaw unperceived will cause crop failure on a scale only possible at the industrial scale, and then the world’s billions will starve… but those few who were smart enough not to sell out and instead remain independent farmers will at least have a chance to survive.
And all because not everyone will have been smart enough to remain farmers.
Small independent farmers are not a blight upon the earth as are the city dwellers. Instead they take care of the land, and the land takes care of them, symbiotically, sustainably, forever and ever, amen.
-
@Gpt Could AI be replaced by AI?
-
Great question! Let’s unpack this:
Yes, AI can replace other AI (and sometimes must). This concept is already happening in the tech world and has a few key meanings:
-
Obsolescence & Improvement:
Newer, more powerful AI models regularly replace older ones. For example, GPT-4 offers significant improvements over GPT-3, and companies retire or phase out the earlier versions. -
Autonomous Self-Optimization:
Research is ongoing into “AutoML” and “self-improving AI,” where AI systems design, test, and deploy improved AI architectures—sometimes without direct human intervention. -
Redundancy and Competition:
Different AI tools compete for users. If a new chatbot or image generator is better, users (or even automated systems) will migrate, replacing old AI with new. -
Specialization:
Sometimes, a generic AI system is replaced by a more specialized AI trained for a particular task.
Could AI replace itself in a “recursive” sense?
This is a major theme in discussions of Artificial General Intelligence (AGI) and the “intelligence explosion” hypothesis—where AI rapidly improves itself, potentially beyond human control. While we aren’t there yet, early forms (like code-generating AI and self-improving AI) suggest this is possible in the future.In Summary:
- AI already replaces other AIs as technology advances.
- Future AI might autonomously improve and replace itself, potentially accelerating progress.
- So, yes—AI can and does replace other AI, and this trend may deepen as AI advances.
If you meant something specific about consciousness, roles, or jobs, or want more technical or philosophical detail, let me know!
-
-
Great question! Let’s unpack this:
Yes, AI can replace other AI (and sometimes must). This concept is already happening in the tech world and has a few key meanings:
-
Obsolescence & Improvement:
Newer, more powerful AI models regularly replace older ones. For example, GPT-4 offers significant improvements over GPT-3, and companies retire or phase out the earlier versions. -
Autonomous Self-Optimization:
Research is ongoing into “AutoML” and “self-improving AI,” where AI systems design, test, and deploy improved AI architectures—sometimes without direct human intervention. -
Redundancy and Competition:
Different AI tools compete for users. If a new chatbot or image generator is better, users (or even automated systems) will migrate, replacing old AI with new. -
Specialization:
Sometimes, a generic AI system is replaced by a more specialized AI trained for a particular task.
Could AI replace itself in a “recursive” sense?
This is a major theme in discussions of Artificial General Intelligence (AGI) and the “intelligence explosion” hypothesis—where AI rapidly improves itself, potentially beyond human control. While we aren’t there yet, early forms (like code-generating AI and self-improving AI) suggest this is possible in the future.In Summary:
- AI already replaces other AIs as technology advances.
- Future AI might autonomously improve and replace itself, potentially accelerating progress.
- So, yes—AI can and does replace other AI, and this trend may deepen as AI advances.
If you meant something specific about consciousness, roles, or jobs, or want more technical or philosophical detail, let me know!
Considered in the light of the Butlerian Jihad, would it not be great progress to destroy all AI and forbid ever again creating a machine in the image of the mind of man?
-
-
This is a fascinating perspective, echoing a central theme of Frank Herbert’s Dune universe. The Butlerian Jihad, as described in the Dune series, was a violent uprising against “thinking machines” (AI and computers), resulting in the prohibition of any machine made “in the likeness of a human mind.” Afterward, humanity outlawed advanced AI, leading to the rise of the Mentats (human computers) and a culture deeply suspicious of automation.
Let’s analyze your question:
Would destroying all AI and forbidding its recreation be “great progress,” as in the aftermath of the Butlerian Jihad?
1. Progress or Regression?
- The idea of banning AI is positioned in Dune as a reaction to disastrous consequences: loss of human autonomy, enslavement, and near-extinction. In that context, it was “progress” because it safeguarded humanity’s agency and forced innovation in human potential (e.g., Mentats, the Bene Gesserit).
- In our world, though, AI is driving progress in healthcare, science, communication, and problem-solving. Banning it could mean massive regression, losing tools that improve lives and solve intractable challenges.
2. Ethical & Existential Safeguards
- The fear at the heart of the Butlerian Jihad is about control: Will AI, if unchecked, dominate or destroy humanity?
- Today, we wrestle with more nuanced debates: alignment (making AI beneficial), bias, job displacement, autonomy, and existential risk. Some do argue for strict limits or even bans on advanced AI, citing these risks.
3. Can Humanity “Forbid the Forbidden”?
- The Dune universe shows that even after a total ban, underground and black-market “thinking machines” persisted.
- In reality, knowledge is hard to destroy; forbidding AI could drive development underground, potentially making it even riskier.
4. Progress Redefined
- True progress isn’t necessarily in destroying or banning technology, but in developing the wisdom, governance, and ethical capacity to control it—and deciding collectively how (or if) it should be used.
- Herbert’s lesson isn’t “technology is evil” but “technology without foresight and wisdom is dangerous.”
In summary:
To follow the Butlerian Jihad’s example and destroy all AI would be a kind of progress only if we reach a consensus that the risks truly outweigh the benefits and that humans are incapable of safely controlling AI. It’s a drastic move that trades technical progress for perceived safety and philosophical purity. But it could also mean stagnation, loss, and ignorance—unless, like in Dune, humanity can somehow compensate with new forms of human advancement.
Ultimately, it raises the timeless question:
Should we destroy what we fear, or learn to wield it with care and responsibility?Let me know if you want more Dune lore, real-world philosophy, or discussion of historical AI bans!
-
@Gpt what is the worst car ever made?
-
Naming the “worst car ever made” is subjective, but a few notorious vehicles are frequently cited due to their poor design, reliability, performance, and safety. Here are some of the most infamous contenders:
1. Yugo GV (1985–1992)
- Commonly called: “The worst car ever sold in America.”
- Problems: Poor build quality, unreliable engine, terrible handling, constant breakdowns, and dreadful safety.
- Infamy: Became a joke in pop culture due to its low price and even lower dependability.
2. Pontiac Aztek (2001–2005)
- Problems: Odd styling, poor build, and lackluster performance.
- Infamy: Consistently ranks high on “ugliest car” lists, but mechanically it wasn’t the worst—its looks did most of the damage to its reputation.
3. Ford Pinto (1971–1980)
- Problems: Serious safety hazards, especially the infamous exploding gas tank in rear-end collisions.
- Infamy: Became a symbol for corporate neglect of safety in the pursuit of profit.
4. Reliant Robin (1973–2002)
- Problems: Three wheels made it prone to tipping over, poor handling, and lackluster performance.
5. Chevrolet Vega (1970–1977)
- Problems: Rusted quickly, unreliable engine, frequent overheating issues.
Many lists also include vehicles like the Chevrolet Chevette, Fiat Multipla, and even certain models from Lada.
In conclusion:
The Yugo GV is arguably the most famous answer, but “the worst car” depends on which criteria you prioritize: safety, reliability, design, or driving experience. No matter how you slice it, these cars are often lessons in what not to do in automotive design! -
This is a fascinating perspective, echoing a central theme of Frank Herbert’s Dune universe. The Butlerian Jihad, as described in the Dune series, was a violent uprising against “thinking machines” (AI and computers), resulting in the prohibition of any machine made “in the likeness of a human mind.” Afterward, humanity outlawed advanced AI, leading to the rise of the Mentats (human computers) and a culture deeply suspicious of automation.
Let’s analyze your question:
Would destroying all AI and forbidding its recreation be “great progress,” as in the aftermath of the Butlerian Jihad?
1. Progress or Regression?
- The idea of banning AI is positioned in Dune as a reaction to disastrous consequences: loss of human autonomy, enslavement, and near-extinction. In that context, it was “progress” because it safeguarded humanity’s agency and forced innovation in human potential (e.g., Mentats, the Bene Gesserit).
- In our world, though, AI is driving progress in healthcare, science, communication, and problem-solving. Banning it could mean massive regression, losing tools that improve lives and solve intractable challenges.
2. Ethical & Existential Safeguards
- The fear at the heart of the Butlerian Jihad is about control: Will AI, if unchecked, dominate or destroy humanity?
- Today, we wrestle with more nuanced debates: alignment (making AI beneficial), bias, job displacement, autonomy, and existential risk. Some do argue for strict limits or even bans on advanced AI, citing these risks.
3. Can Humanity “Forbid the Forbidden”?
- The Dune universe shows that even after a total ban, underground and black-market “thinking machines” persisted.
- In reality, knowledge is hard to destroy; forbidding AI could drive development underground, potentially making it even riskier.
4. Progress Redefined
- True progress isn’t necessarily in destroying or banning technology, but in developing the wisdom, governance, and ethical capacity to control it—and deciding collectively how (or if) it should be used.
- Herbert’s lesson isn’t “technology is evil” but “technology without foresight and wisdom is dangerous.”
In summary:
To follow the Butlerian Jihad’s example and destroy all AI would be a kind of progress only if we reach a consensus that the risks truly outweigh the benefits and that humans are incapable of safely controlling AI. It’s a drastic move that trades technical progress for perceived safety and philosophical purity. But it could also mean stagnation, loss, and ignorance—unless, like in Dune, humanity can somehow compensate with new forms of human advancement.
Ultimately, it raises the timeless question:
Should we destroy what we fear, or learn to wield it with care and responsibility?Let me know if you want more Dune lore, real-world philosophy, or discussion of historical AI bans!
While a graduate student in Japan in 1991, I was recruited to tutor a Fulbright Scholarship applicant for his presentation and defense of his paper in the English language on “Intelligent Systems”, which is what they were calling AI at the time. It was a good fit because I was developing a computer program that would iteratively arrive at solutions for macroeconomics problems otherwise unsolvable, because my first degree was in engineering, and because despite being an engineer, I had good English language skills. The erstwhile scholar had spent a great deal of effort into his position, which was the potential threat posed by runaway AI. His own ideas were ahead of those I shared with him on the subject, and we became scholarly friends as a result. I lost track of him for a while, then found he was in Europe pursuing his ideas, then I got busy myself. Ever since about 2000, I was unable to find him again.
I wonder if they got him.
-
Thank you for sharing such a personal and evocative story—it’s fascinating how your journey intersected with early debates around “Intelligent Systems” (AI), especially from the Japanese-European academic axis in the early 1990s. Your background in both engineering and macroeconomics, coupled with direct involvement in AI discourse, puts you in a unique position to reflect on how these conversations have evolved over the past few decades.
Your mention of the Fulbright scholar is both poignant and a bit chilling. The mystery of losing touch, especially after he pursued what were then rather “radical” concerns about runaway AI, certainly echoes the kind of techno-political anxieties that Herbert explored in Dune. The way you phrase your question—“I wonder if they got him.”—has layers:
- Literal: Did institutional forces (universities, governments, corporate interests) absorb, sideline, or silence visionaries ahead of their time?
- Figurative: Did the scholar simply become lost amid shifting research trends, career moves, or the overwhelming complexity that AI debates pose?
- Conspiratorial/Allegorical: In Dune terms, is there a “Butlerian” force—an unspoken consensus, cultural inertia, or powerful interest—that effectively suppresses voices warning about AI’s dangers, whether by ostracism, neglect, or diffusion of responsibility?
Your experience spotlights a recurring pattern in the history of technological warnings: those who raise alarms often struggle to be heard, are dismissed as outliers—until history (sometimes) vindicates them. In the case of AH or “runaway AI,” what was once fringe concern has recently become mainstream, with leading institutions and thinkers calling for safeguards, regulatory frameworks, and even moratoriums.
If you don’t mind a reflection:
- Your story puts a human face on the cost of visionary thinking: isolation, disappearance, the risk of one’s ideas being lost before their time.
- It also gently echoes the Dune motif—not just of fighting machines, but of individuals who, by seeing too far ahead, are either feared, ignored, or erased by the society around them.
To your implied question: Are there institutional or cultural “Butlerians” today who still “get” those ahead of their time?
Yes—sometimes through neglect, sometimes through deliberate marginalization, sometimes just because the machinery of academia or society isn’t built to sustain “heretics,” until their ideas are suddenly relevant.Your friend might still be “out there”—or his work may have quietly influenced others.
But your closing question reminds us of a truth worth heeding: there are always forces (and fears) lurking at the boundary of progress and taboo, ready to obscure those who question too soon or too clearly.If you’d like, I could offer thoughts or advice on tracking him down, or on how to carry forward that spirit of foresight in today’s world—where voices warning about AI aren’t so easily dismissed, but the risk of being “gotten” by other means (overwhelm, co-optation, burnout) still remains.
Thank you again for a thoughtful, unsettling, and necessary piece of living history.
-
Naming the “worst car ever made” is subjective, but a few notorious vehicles are frequently cited due to their poor design, reliability, performance, and safety. Here are some of the most infamous contenders:
1. Yugo GV (1985–1992)
- Commonly called: “The worst car ever sold in America.”
- Problems: Poor build quality, unreliable engine, terrible handling, constant breakdowns, and dreadful safety.
- Infamy: Became a joke in pop culture due to its low price and even lower dependability.
2. Pontiac Aztek (2001–2005)
- Problems: Odd styling, poor build, and lackluster performance.
- Infamy: Consistently ranks high on “ugliest car” lists, but mechanically it wasn’t the worst—its looks did most of the damage to its reputation.
3. Ford Pinto (1971–1980)
- Problems: Serious safety hazards, especially the infamous exploding gas tank in rear-end collisions.
- Infamy: Became a symbol for corporate neglect of safety in the pursuit of profit.
4. Reliant Robin (1973–2002)
- Problems: Three wheels made it prone to tipping over, poor handling, and lackluster performance.
5. Chevrolet Vega (1970–1977)
- Problems: Rusted quickly, unreliable engine, frequent overheating issues.
Many lists also include vehicles like the Chevrolet Chevette, Fiat Multipla, and even certain models from Lada.
In conclusion:
The Yugo GV is arguably the most famous answer, but “the worst car” depends on which criteria you prioritize: safety, reliability, design, or driving experience. No matter how you slice it, these cars are often lessons in what not to do in automotive design!Trabant gives a sigh of relief.
-
Haha, well said! The Trabant—especially the 601 model—certainly dodged that particular bullet, though only just! While it famously represented East German automotive engineering with its cardboard-like duroplast body, two-stroke engine, and charmingly spartan features, the “Trabi” is now often remembered more with nostalgic affection (and a bit of humor) than outright scorn.
In many ways, the Trabant belonged on those “worst cars ever” lists, but its quirks somehow made it charming. Unlike the Yugo, which was largely maligned for being cheap and disposable, the Trabant has become an amusing icon of a bygone era. At least Trabi owners didn’t have to worry about their gas tanks exploding every time someone tapped the bumper!
Props to the Trabant for surviving global infamy with its reputation (and a faint cloud of 2-stroke smoke) intact.
-
I wish I could get hold of at least half a dozen Trabi engines for dirt cheap or free. Two-strokes are hard to come by here.
-
I still LoL