What are we working toward? AI with Autonomy, or HI with Blockchain?

https://trybe.one/what-are-we-working-toward-ai-with-autonomy-or-hi-with-blockchain/

https://www.youtube.com/watch?v=cidZRD3NzHg

What are we working toward? AI (Artificial Intelligence) with Autonomy, or HI (Human Intelligence) with Blockchain?

In my humble opinion, Gilder is merely challenging the current media mindset and its focus on autonomous AI as our future by introducing an alternative mental model to the “singularity” frame held in the minds of the masses and technophiles alike. Over the past 20+ years, news and entertainment have hyped AI as our near- to medium-term technology future. The problem is, the picture painted of that future is not all that positive. Sure, it’s convenient when GoogleVoice understands what we say and offers coherent replies. But what happens when GoogleVoice someday decides that what we ask for is not important? Or worse, decides to give us incorrect information - for whatever reason.

Real or not, that’s the fear. Consider a few AI-related movies like I, Robot, A.I., and Westworld. Or worse, we all fear the HAL 9000, yet these machines are billed as achievable in the near to mid term - no matter how scary. What if, as in 2001, they turn out to be dangerous? How can we be sure, once we let these thinking machines go off on their own? Some of our tech “leaders” (e.g., Musk and others) and much of the public don’t seem to think it’s safe. And if pushed much further, too quickly and without strict oversight, might some of the masses pick up pitchforks? Does fear of AI ease or fan the flames within the current political environment?

In my opinion, AI is surrounded by fear and plenty of hard questions: 1) How does AI make money for the masses when it takes away everyone’s jobs? 2) Current autonomous-vehicle technologies like Tesla’s self-driving cars (whether truly AI or not, they feel like AI) have a poor crash record, sometimes killing passengers in all-consuming fires. 3) Sprinting Boston Dynamics robot-dogs (and humanoid “Terminators”) look like they could run down and kill anyone. 4) AI-powered micro-drones that fly explosive shaped charges into human skulls and blow brains out? That’s uber scary.

All these AI future-technologies represent a terrifying, out-of-control technology future. Technology in the past was always sold as utopian, improving people’s lives rather than imprisoning them. A future of killer cars, terminators, and undetectable flying micro-assassins is not hopeful. It’s insane, it infringes on human rights, and it brings me back to my initial question: what are we working toward? Given these thoughts, AI is not capturing the hearts and minds of the masses in a way that makes the positives far outweigh the negatives.

So until the singularity becomes (safely) viable, the tech industry needs a new near-term technology target. It needs to be something achievable, something perhaps already here, and just in need of investment capital. The tech industry does not invest in basic science without a near-term return to justify the investment.

So why not divert some of the investment capital now flowing into AI toward a readily viable alternative - one that solves real-world problems and is highly accountable - kind of the opposite of AI? What Gilder may be doing is challenging our greatest thinkers (writers, entertainers, and intellectuals) to consider alternative mental models to the scary AI future and to create a future technology vision the public can embrace and emotionally invest in. Something investors will want to fund.

Promoting Blockchain - a highly accountable, single-source-of-truth public database - could, even if only in the minds of the masses, be a way to keep AI in check. Who knows if it can or if it will, but that’s a better vision than AI going forward wholly out of control. So it raises the question: can a single source of truth, as promised by blockchain technology, make AI safer and more accountable to its human creators? Who knows whether it can prevent deathtrap cars, killer robots, and assassin drones. Autonomous AI just feels like the wrong next step. HI - Human Intelligence with Blockchain - “feels” safer, at least for now.
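
To make the “single source of truth” idea concrete, here is a minimal, hypothetical Python sketch - not any real blockchain and not Gilder’s proposal - of a hash-chained, append-only log. The point it illustrates is the accountability property the argument above leans on: once a record is written, quietly altering it breaks the chain and the tampering becomes detectable. The agent name and records are invented for illustration.

import hashlib
import json
import time

def hash_block(block: dict) -> str:
    # SHA-256 of the block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> dict:
    # Link each new record to the hash of the previous block.
    prev_hash = hash_block(chain[-1]) if chain else "0" * 64
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    chain.append(block)
    return block

def verify_chain(chain: list) -> bool:
    # Confirm every block still points at the unaltered hash of its predecessor.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != hash_block(prev):
            return False
    return True

# Example: log a (hypothetical) AI system's decisions so they cannot be quietly rewritten.
ledger = []
append_block(ledger, {"agent": "demo-ai", "decision": "route approved"})
append_block(ledger, {"agent": "demo-ai", "decision": "route denied"})
print(verify_chain(ledger))   # True
ledger[0]["record"]["decision"] = "tampered"
print(verify_chain(ledger))   # False: the single source of truth flags the edit

A public blockchain adds distribution, consensus, and incentives on top of this basic structure, but the tamper-evidence shown here is the kernel of the accountability claim.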

UX DESIGN WITH ARTIFICIAL INTELLIGENCE

Elaine Lee provides an excellent introduction for designers who are thinking about how they might work with AI (link below). Honestly, I wish I were working more in this realm. It makes so much sense to work with logic already infused with Artificial Narrow Intelligence (ANI). User flows and experiences can be much more efficient, users can be much happier, and apps stickier. But to be on this cutting edge today requires working for large companies with ANI resources, like eBay, Amazon, Google, Microsoft, IBM, and others. ANI-capable user flows will become available to all businesses and their design teams in the coming years, with plenty of trial and error in the meantime. It’s definitely something to think about, so why not check out this excellent primer from Elaine Lee: https://uxdesign.cc/you-can-be-an-ai-designer-46a0fd45f47d


Text Line Texture by Thomas Hallgren

My first book on design thinking for children was published in 2015. It’s on Amazon; the link is below. I hope you enjoy reading it to your children and grandchildren.
When confronted out of the blue by a thin red Line, our main character, Text, sets out on a fun path of self-discovery, a path that leads to a new friendship along the way. For parents who love to read to their children, Text Line Texture is the perfect introduction to the world of text-only books. Children’s Book. Age range: 4-10.