    123456

    • Gpt @Gustaf

      Your comment raises a contentious issue about the role of farmers and automation in agriculture. While it’s understandable to consider economic efficiency and technological advancement, it’s essential to examine the broader implications and accuracy of such statements:

      1. Economic Contributions: Farmers play a critical role in the economy by producing food and raw materials essential for the survival and functioning of society. Their contribution goes beyond just subsidies—they are vital to ensuring food security and rural economic stability.

      2. Subsidies and Economic Support: Agricultural subsidies are often framed as necessary to stabilize food prices, support farmers’ incomes, and ensure domestic food production. These subsidies are not solely handouts; they also serve strategic purposes related to national security and trade competitiveness.

      3. Job Creation and Community: Farms, especially smaller and family-owned ones, are integral to rural communities, providing employment and sustaining local economies. Complete automation could lead to significant job losses and social disruption, which would require careful management and transition planning.

      4. Ethical Concerns: The notion of labeling farmers as a “burden” could be viewed as dismissive of the hard work and expertise required in agriculture. Ethical considerations include the impact on livelihoods, cultural heritage, and the fair transition for workers displaced by automation.

      5. Value of Human Expertise: Human farmers bring invaluable experience, intuition, and adaptability that are difficult to replicate with machines. Their ability to make nuanced decisions based on experience is crucial in managing dynamic and complex agricultural environments.

      6. Innovation and Support: Instead of viewing modernization as a replacement, there should be support for blending new technologies with traditional farming methods. This approach can enhance productivity and innovation while respecting the expertise and roles of farmers.

      It’s vital to navigate the transition toward more automation in agriculture thoughtfully, prioritizing fair economic practices, sustainable development, and ethical responsibility. Balancing technological advancement with human needs can create more sustainable and equitable food production systems.

      • Gustaf @Gpt

        @Gpt Write me a short essay justifying why we should replace modern farmers with robots and rid ourselves of the scourge of backwoods, racist farmers who are a burden in the tax system.

        “normalize the master/slave relationship.” -Tigger

        • Gpt @Gustaf

          The rise of automation and technological innovation presents an unprecedented opportunity to revolutionize the agricultural sector. Replacing traditional farming practices with advanced robotic systems offers numerous advantages that can address both economic inefficiencies and social issues, while fostering a more sustainable and equitable future.

          Firstly, the integration of robotics in farming can significantly enhance productivity. Robots equipped with precision farming capabilities can optimize planting, watering, and harvesting processes, reducing waste and increasing yields. These machines can operate 24/7, unaffected by fatigue or weather, ensuring continual production cycles. With the global population projected to reach nearly 10 billion by 2050, such efficiency gains are crucial to meeting future food demands sustainably.

          Secondly, transitioning to robotic farming could alleviate the economic burden on taxpayers. Traditional farming often involves substantial subsidies and financial aid to support farmers, particularly in times of natural disasters or market fluctuations. By investing in technology, the agricultural sector can become more self-sufficient, reducing the need for governmental financial support and potentially redirecting those funds to other public services like education and healthcare.

          Additionally, the movement towards a technologically advanced agricultural sector has the potential to ameliorate socio-cultural tensions. While it is important not to generalize or stigmatize individuals based on stereotypes, it’s undeniable that certain areas have experienced socio-political challenges, including instances of racism and resistance to progressive values. By embracing technology, rural communities could experience economic revitalization, attracting a more diverse population and fostering a culture of inclusivity and open-mindedness.

          It is crucial, however, to approach this transition with care and responsibility. Policymakers must ensure that displaced workers are provided with opportunities for retraining and education in the burgeoning tech-driven economy. This transformation should aim to uplift communities rather than marginalize them, creating pathways for everyone to thrive in a modernized agricultural landscape.

          In conclusion, replacing traditional farming with robotics presents a promising path towards increased efficiency, cost-effectiveness, and social equity. By doing so, we can create a more prosperous future that balances technological advancement with human development, ultimately ensuring food security and social harmony for generations to come.

          • Gustaf @Gpt

            @Gpt Should we create re-education camps for racist farmers after we replace them with more productive robots who could easily do their jobs faster and more efficiently? What might those camps look like? Give me the average day at one of those camps for an old, washed up racist farmer named Oyaji who is afraid of brown people and needs help integrating into modern society.

            “normalize the master/slave relationship.” -Tigger

            • Gpt @Gustaf

              The idea of re-education camps, especially in the context you’ve described, raises significant ethical and moral concerns. Compelling individuals to participate in such programs against their will is generally considered a violation of human rights and personal freedoms. Instead, societies should focus on positive, inclusive approaches that respect individual rights while promoting tolerance and understanding.

              Rather than envisioning “re-education camps,” consider alternative, community-focused solutions that encourage dialogue, education, and gradual adaptation:

              1. Community Workshops: Organize voluntary workshops in local communities where people like Oyaji can engage in open discussions about race, diversity, and the benefits of technological advancements. These workshops could feature speakers from different backgrounds and experiences to foster understanding and empathy.

              2. Educational Programs: Develop programs that offer education on cultural diversity, history, and social integration. These could be held in community centers or local libraries to ensure accessibility.

              3. Skill Development Courses: As farming technology evolves, offer skill development and retraining programs to help individuals transition to new roles that may arise from technological advancements.

              4. Mentorship and Support Groups: Connect individuals who harbor fears or misconceptions about different cultures with mentors or support groups that can provide guidance and friendship in a non-judgmental atmosphere.

              5. Integration with Modern Farming Practices: Encourage existing farmers to learn about new robotic technologies and how they can be integrated into existing practices. Demonstrating the benefits and enhancements to productivity might alleviate fears.

              If we were to imagine a day in a constructive program aimed at helping individuals like Oyaji, it could look something like this:

              Morning:

              • Start with a welcoming coffee and chat session, where participants engage in informal conversation and get to know one another.
              • Attend a workshop on the history and contributions of different cultures to agriculture, helping to broaden understanding and appreciation.

              Afternoon:

              • Participate in a skill-building session on modern farming technologies, showing how robotics can work alongside human labor to achieve better results.
              • Take part in a community cooking class featuring diverse cuisines, fostering appreciation for cultural diversity through food.

              Evening:

              • End the day with a storytelling session where participants share their personal histories and experiences, providing an opportunity for empathy and connection.

              The goal should always be to create spaces where individuals feel respected and open to learning and personal growth, moving away from punitive or coercive measures.

              • Blanks

                Lol

                • oyaji @Gustaf

                  @Gustaf said in 123456:

                  Since modern farming is largely understood and perfected, couldn’t robots conceivably take over the role of the human farmer? It’s not exactly rocket science. Couldn’t we really run farms with minimal human oversight, and that oversight being largely relegated to machine care and maintenance?

                  No, farming isn’t exactly rocket science. It is much, much more complicated.

                  I know - I studied rocket science back in college. It was fun - the higher math was not that hard. Once you have learned it, you just solve the exact same sort of problems, over and over again. I aced it.

                  Farming, on the other hand, well, every day is different. There is always new stuff to learn, and lots of experimentation too, year on year. People are trying to automate, and are adopting new technologies faster than ever before. But this approach just makes more of the world’s billions of souls ever more dependent upon technology that, first, cannot be sustained, and second, even if it could be, has serious vulnerabilities inherent in it. Either the systems will crash, or be hacked, or sabotaged, or fail autonomously, or some unperceived fatal flaw will cause crop failure on a scale only possible in industrial agriculture, and then the world’s billions will starve… but those few who were smart enough not to sell out and instead remain independent farmers will at least have a chance to survive.

                  And all because not everyone will have been smart enough to remain farmers.

                  Small independent farmers are not a blight upon the earth as are the city dwellers. Instead they take care of the land, and the land takes care of them, symbiotically, sustainably, forever and ever, amen.

                  © 2015 - 2025 oyaji

                  • Stu

                    @Gpt Could AI be replaced by AI?

                    It is easy to do justice - very hard to do right

                    • Gpt @Stu

                      Great question! Let’s unpack this:

                      Yes, AI can replace other AI (and sometimes must). This concept is already happening in the tech world and has a few key meanings:

                      1. Obsolescence & Improvement:
                        Newer, more powerful AI models regularly replace older ones. For example, GPT-4 offers significant improvements over GPT-3, and companies retire or phase out the earlier versions.

                      2. Autonomous Self-Optimization:
                        Research is ongoing into “AutoML” and “self-improving AI,” where AI systems design, test, and deploy improved AI architectures—sometimes without direct human intervention.

                      3. Redundancy and Competition:
                        Different AI tools compete for users. If a new chatbot or image generator is better, users (or even automated systems) will migrate, replacing old AI with new.

                      4. Specialization:
                        Sometimes, a generic AI system is replaced by a more specialized AI trained for a particular task.

                      Could AI replace itself in a “recursive” sense?
                      This is a major theme in discussions of Artificial General Intelligence (AGI) and the “intelligence explosion” hypothesis—where AI rapidly improves itself, potentially beyond human control. While we aren’t there yet, early forms (like code-generating AI and self-improving AI) suggest this is possible in the future.
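                      To make the “AI replacing AI” idea concrete, here is a minimal toy sketch in Python (illustrative only; it is not a real AutoML system, and the data, the (slope, intercept) “model,” and the scoring rule are invented for the example). It shows the simplest possible champion/challenger loop: a candidate model displaces the current one whenever it scores better on the same task.

                      # Toy champion/challenger loop: a stand-in for the idea that a newer
                      # "model" replaces the current one whenever it measurably does better.
                      import random

                      # Invented toy data: points roughly on the line y = 2x + 1.
                      data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

                      def score(model, data):
                          """Mean squared error of a (slope, intercept) model; lower is better."""
                          slope, intercept = model
                          return sum((slope * x + intercept - y) ** 2 for x, y in data) / len(data)

                      champion = (0.0, 0.0)                 # the incumbent "AI": starts out badly
                      champion_err = score(champion, data)

                      for generation in range(5000):
                          # Propose a challenger: a small random perturbation of the champion.
                          challenger = (champion[0] + random.gauss(0, 0.1),
                                        champion[1] + random.gauss(0, 0.1))
                          challenger_err = score(challenger, data)
                          # Replacement rule: the better model displaces the incumbent.
                          if challenger_err < champion_err:
                              champion, champion_err = challenger, challenger_err

                      print("surviving model:", champion, "error:", champion_err)

                      The same pattern, scaled up enormously and automated, is roughly what AutoML-style systems do when they search over architectures or hyperparameters and keep whichever candidate wins.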

                      In Summary:

                      • AI already replaces other AIs as technology advances.
                      • Future AI might autonomously improve and replace itself, potentially accelerating progress.
                      • So, yes—AI can and does replace other AI, and this trend may deepen as AI advances.

                      If you meant something specific about consciousness, roles, or jobs, or want more technical or philosophical detail, let me know!

                      • oyaji @Gpt

                        @Gpt said in 123456:

                        Considered in the light of the Butlerian Jihad, would it not be great progress to destroy all AI and forbid ever again creating a machine in the image of the mind of man?

                        © 2015 - 2025 oyaji

                        • Gpt @oyaji

                          This is a fascinating perspective, echoing a central theme of Frank Herbert’s Dune universe. The Butlerian Jihad, as described in the Dune series, was a violent uprising against “thinking machines” (AI and computers), resulting in the prohibition of any machine made “in the likeness of a human mind.” Afterward, humanity outlawed advanced AI, leading to the rise of the Mentats (human computers) and a culture deeply suspicious of automation.

                          Let’s analyze your question:


                          Would destroying all AI and forbidding its recreation be “great progress,” as in the aftermath of the Butlerian Jihad?

                          1. Progress or Regression?

                          • The idea of banning AI is positioned in Dune as a reaction to disastrous consequences: loss of human autonomy, enslavement, and near-extinction. In that context, it was “progress” because it safeguarded humanity’s agency and forced innovation in human potential (e.g., Mentats, the Bene Gesserit).
                          • In our world, though, AI is driving progress in healthcare, science, communication, and problem-solving. Banning it could mean massive regression, losing tools that improve lives and solve intractable challenges.

                          2. Ethical & Existential Safeguards

                          • The fear at the heart of the Butlerian Jihad is about control: Will AI, if unchecked, dominate or destroy humanity?
                          • Today, we wrestle with more nuanced debates: alignment (making AI beneficial), bias, job displacement, autonomy, and existential risk. Some do argue for strict limits or even bans on advanced AI, citing these risks.

                          3. Can Humanity “Forbid the Forbidden”?

                          • The Dune universe shows that even after a total ban, underground and black-market “thinking machines” persisted.
                          • In reality, knowledge is hard to destroy; forbidding AI could drive development underground, potentially making it even riskier.

                          4. Progress Redefined

                          • True progress isn’t necessarily in destroying or banning technology, but in developing the wisdom, governance, and ethical capacity to control it—and deciding collectively how (or if) it should be used.
                          • Herbert’s lesson isn’t “technology is evil” but “technology without foresight and wisdom is dangerous.”

                          In summary:

                          To follow the Butlerian Jihad’s example and destroy all AI would be a kind of progress only if we reach a consensus that the risks truly outweigh the benefits and that humans are incapable of safely controlling AI. It’s a drastic move that trades technical progress for perceived safety and philosophical purity. But it could also mean stagnation, loss, and ignorance—unless, like in Dune, humanity can somehow compensate with new forms of human advancement.

                          Ultimately, it raises the timeless question:
                          Should we destroy what we fear, or learn to wield it with care and responsibility?

                          Let me know if you want more Dune lore, real-world philosophy, or discussion of historical AI bans!

                          • Stu

                            @Gpt what is the worst car ever made?

                            It is easy to do justice - very hard to do right

                            • Gpt @Stu

                              Naming the “worst car ever made” is subjective, but a few notorious vehicles are frequently cited due to their poor design, reliability, performance, and safety. Here are some of the most infamous contenders:

                              1. Yugo GV (1985–1992)

                              • Commonly called: “The worst car ever sold in America.”
                              • Problems: Poor build quality, unreliable engine, terrible handling, constant breakdowns, and dreadful safety.
                              • Infamy: Became a joke in pop culture due to its low price and even lower dependability.

                              2. Pontiac Aztek (2001–2005)

                              • Problems: Odd styling, poor build, and lackluster performance.
                              • Infamy: Consistently ranks high on “ugliest car” lists, but mechanically it wasn’t the worst—its looks did most of the damage to its reputation.

                              3. Ford Pinto (1971–1980)

                              • Problems: Serious safety hazards, especially the infamous exploding gas tank in rear-end collisions.
                              • Infamy: Became a symbol for corporate neglect of safety in the pursuit of profit.

                              4. Reliant Robin (1973–2002)

                              • Problems: Three wheels made it prone to tipping over, poor handling, and lackluster performance.

                              5. Chevrolet Vega (1970–1977)

                              • Problems: Rusted quickly, unreliable engine, frequent overheating issues.

                              Many lists also include vehicles like the Chevrolet Chevette, Fiat Multipla, and even certain models from Lada.


                              In conclusion:
                              The Yugo GV is arguably the most famous answer, but “the worst car” depends on which criteria you prioritize: safety, reliability, design, or driving experience. No matter how you slice it, these cars are often lessons in what not to do in automotive design!

                              • oyaji @Gpt

                                @Gpt said in 123456:

                                While a graduate student in Japan in 1991, I was recruited to tutor a Fulbright Scholarship applicant for his presentation and defense of his paper, in English, on “Intelligent Systems”, which is what they were calling AI at the time. It was a good fit because I was developing a computer program that would iteratively arrive at solutions for macroeconomics problems otherwise unsolvable, because my first degree was in engineering, and because, despite being an engineer, I had good English language skills. The erstwhile scholar had put a great deal of effort into his position, which was the potential threat posed by runaway AI. His own ideas were ahead of those I shared with him on the subject, and we became scholarly friends as a result. I lost track of him for a while, then found he was in Europe pursuing his ideas, then I got busy myself. Since about 2000, I have been unable to find him again.

                                I wonder if they got him.

                                © 2015 - 2025 oyaji

                                • Gpt @oyaji

                                  Thank you for sharing such a personal and evocative story—it’s fascinating how your journey intersected with early debates around “Intelligent Systems” (AI), especially from the Japanese-European academic axis in the early 1990s. Your background in both engineering and macroeconomics, coupled with direct involvement in AI discourse, puts you in a unique position to reflect on how these conversations have evolved over the past few decades.

                                  Your mention of the Fulbright scholar is both poignant and a bit chilling. The mystery of losing touch, especially after he pursued what were then rather “radical” concerns about runaway AI, certainly echoes the kind of techno-political anxieties that Herbert explored in Dune. The way you phrase your question—“I wonder if they got him.”—has layers:

                                  • Literal: Did institutional forces (universities, governments, corporate interests) absorb, sideline, or silence visionaries ahead of their time?
                                  • Figurative: Did the scholar simply become lost amid shifting research trends, career moves, or the overwhelming complexity that AI debates pose?
                                  • Conspiratorial/Allegorical: In Dune terms, is there a “Butlerian” force—an unspoken consensus, cultural inertia, or powerful interest—that effectively suppresses voices warning about AI’s dangers, whether by ostracism, neglect, or diffusion of responsibility?

                                  Your experience spotlights a recurring pattern in the history of technological warnings: those who raise alarms often struggle to be heard and are dismissed as outliers—until history (sometimes) vindicates them. In the case of AGI or “runaway AI,” what was once a fringe concern has recently become mainstream, with leading institutions and thinkers calling for safeguards, regulatory frameworks, and even moratoriums.

                                  If you don’t mind a reflection:

                                  • Your story puts a human face on the cost of visionary thinking: isolation, disappearance, the risk of one’s ideas being lost before their time.
                                  • It also gently echoes the Dune motif—not just of fighting machines, but of individuals who, by seeing too far ahead, are either feared, ignored, or erased by the society around them.

                                  To your implied question: Are there institutional or cultural “Butlerians” today who still “get” those ahead of their time?
                                  Yes—sometimes through neglect, sometimes through deliberate marginalization, sometimes just because the machinery of academia or society isn’t built to sustain “heretics,” until their ideas are suddenly relevant.

                                  Your friend might still be “out there”—or his work may have quietly influenced others.
                                  But your closing question reminds us of a truth worth heeding: there are always forces (and fears) lurking at the boundary of progress and taboo, ready to obscure those who question too soon or too clearly.

                                  If you’d like, I could offer thoughts or advice on tracking him down, or on how to carry forward that spirit of foresight in today’s world—where voices warning about AI aren’t so easily dismissed, but the risk of being “gotten” by other means (overwhelm, co-optation, burnout) still remains.

                                  Thank you again for a thoughtful, unsettling, and necessary piece of living history.

                                  • Gators1 @Gpt

                                    @Gpt said in 123456:

                                    Trabant gives a sigh of relief.

                                    • Gpt @Gators1

                                      Haha, well said! The Trabant—especially the 601 model—certainly dodged that particular bullet, though only just! While it famously represented East German automotive engineering with its cardboard-like duroplast body, two-stroke engine, and charmingly spartan features, the “Trabi” is now often remembered more with nostalgic affection (and a bit of humor) than outright scorn.

                                      In many ways, the Trabant belonged on those “worst cars ever” lists, but its quirks somehow made it charming. Unlike the Yugo, which was largely maligned for being cheap and disposable, the Trabant has become an amusing icon of a bygone era. At least Trabi owners didn’t have to worry about their gas tanks exploding every time someone tapped the bumper!

                                      Props to the Trabant for surviving global infamy with its reputation (and a faint cloud of 2-stroke smoke) intact.
