What are we working toward? AI with Autonomy, or HI with Blockchain?

https://trybe.one/what-are-we-working-toward-ai-with-autonomy-or-hi-with-blockchain/

https://www.youtube.com/watch?v=cidZRD3NzHg

What are we working toward? AI (Artificial Intelligence) with Autonomy, or HI (Human Intelligence) with Blockchain?

In my humble opinion, Gilder is merely challenging the current media mindset and its focus on autonomous AI as our future by introducing an alternative mental model to the "singularity" frame held in the minds of the masses and technophiles alike. Over the past 20+ years, news and entertainment have hyped AI as our near- to medium-term technology future. The problem is that the picture painted by this future is not all that positive. Sure, it's convenient when GoogleVoice understands what we say and offers coherent replies. But what happens when, someday, GoogleVoice decides that what we ask for is not important? Or worse, wants to give us incorrect information - for whatever reason?

Real or not, that's the fear. Consider a few AI-related movies like I, Robot, AI, and Westworld. Or worse, we all fear the HAL 9000, yet machines like these are billed as achievable in the near/mid-term - no matter how scary. What if, as in 2001, they turn out to be dangerous? How can we be sure, once we let these thinking machines go off on their own? Some of our tech "leaders" (e.g., Musk and others) and the masses don't seem to think it's safe. And if AI is pushed much further, too quickly and without strict oversight, might some of the masses pick up pitchforks? Does fear of AI ease or fan the flames within the current political environment?

In my opinion, AI is surrounded by fear and lots of questions, like: 1) how does AI make money for the masses when it takes away everyone's job? 2) current autonomous-vehicle technologies like Tesla's self-driving cars (whether truly AI or not, they feel like AI) have a poor crash record, sometimes killing the passengers in all-consuming fires; 3) sprinting Boston Dynamics robot-dogs (and humanoid "Terminators") look like they could run down and kill everyone; and 4) AI-powered micro-drones could fly explosive shaped charges into human skulls and blow brains out! That's uber scary.

All these AI future-technologies represent a terrifying, out-of-control technology future. Technology in the past was always sold as utopian, as improving people's lives, not imprisoning them. A future of killer cars, terminators, and undetectable flying micro-assassins is not hopeful. That's insane, it infringes on human rights, and it brings me back to my initial question: what are we working toward? Given these fears, AI is not capturing the hearts and minds of the masses; the positives do not yet clearly outweigh the negatives.

So until the singularity becomes (safely) viable, the tech industry needs a new near-term technology target. It needs to be something achievable, something perhaps already here and just in need of investment capital. The tech industry does not invest in basic science without a near-term return to justify the investment.

So why not divert some of that investment capital into a readily viable alternative - one that solves real-world problems and is highly accountable - kind of the opposite of AI? What Gilder may be doing is challenging our greatest thinkers (writers, entertainers, and intellectuals) to consider alternative mental models to the scary AI future and to create a future technology vision the public can embrace and emotionally invest in - something investors will want to fund.

Promoting blockchain - a highly accountable, single-source-of-truth public database - could, even if only in the minds of the masses, help keep AI in check. Who knows if it can or if it will, but that's a better vision than AI going forward wholly out of control. So it raises the question: can a single source of truth, as promised by blockchain technology, make AI safer and more accountable to its human creators? Who knows if it can prevent deathtrap cars, killer robots, and assassin drones. Autonomous AI just feels like the wrong next step. HI - Human Intelligence with Blockchain - "feels" safer, at least for now.