“The lightning-fast advancement of AI has stirred up a frenzy of concerns regarding the lurking dangers that await us. With each passing moment, the future risks loom larger, threatening to rip apart the very fabric of our society. The menace of AI knows no bounds, as it seeks to invade our privacy, trample on our human rights, and endanger our very existence. The hour of reckoning draws near, and we must brace ourselves for the terrifying implications that lie ahead.”

That quote comes from ChatGPT itself. Many critics fear the rapid growth of AI could threaten jobs or be used for malicious purposes; AI can display biases against certain groups and can be used to create and spread misinformation. As a result, many experts say AI needs regulation.

The UK’s Secretary of State for Science, Innovation and Technology, Michelle Donelan, is pushing for legislation to control the development of AI. She states, “Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.”

China has already created legislation mandating that companies notify users when an AI algorithm is playing a role. The new regulations, issued by the Cyberspace Administration and enacted in January, cover deep-synthesis technology, including deepfakes and other types of generative AI systems. The rules place significant restrictions on AI-generated media, including a requirement that such content carry identifiers, like watermarks.

In the EU, the European Commission has published proposals for regulations titled the Artificial Intelligence Act, which would have a much broader scope than China’s enacted regulation. They include “grading” AI products according to how potentially harmful they might be and scaling regulation accordingly.

An open letter drafted by the Future of Life Institute, a nonprofit group dedicated to “steering transformative technologies away from extreme, large-scale risks and towards benefiting life,” has been signed by more than 1,400 prominent high-tech experts and industry leaders, including Elon Musk and Apple co-founder Steve Wozniak. The letter calls for a six-month pause in development so that safety protocols for the technology can be established.

In the letter, the group notes the rapidly developing capabilities of AI technology and how it has surpassed human performance in many areas. Specifically, the letter points to the recent introduction of GPT-4, which it claims is more accurate and human-like, with the ability to analyze and respond to images.

Personally, I question the motives behind this letter. With the amount of AI that Tesla uses in its cars, and other tech companies using AI in their products, this alarm sounds a lot like jealousy of what OpenAI (the organization behind GPT-4 and ChatGPT) has been able to accomplish.

Just as we have the CAN-SPAM Act for email, we will likely have legislation to control AI in the near future. But as ChatGPT also states:

“As AI continues to evolve, it has the potential to bring tremendous benefits to society. While there have been questions raised about the possible risks it could pose, experts believe that these concerns can be addressed through careful planning and implementation. In fact, many researchers are optimistic about the ways in which AI can enhance our lives, from improving healthcare to advancing scientific discovery. With responsible development, AI can be harnessed to unlock countless new opportunities and drive progress in ways we have yet to imagine. There is no need to fear the future of AI, as we have the power to shape its impact and reap its rewards.”

Survey Says…

With so much misinformation and fake news out there about AI, I decided to conduct a simple survey, with a single question: On a scale of 1 to 10, how do you feel about the sudden rise of Artificial Intelligence?

Given the amount of hate mail I received after conducting webinars on AI, I expected to see a lot of gloom and doom. Instead, the results were pretty encouraging. The median answer was 8.5, meaning that half of the respondents answered 9 or above and half answered 8 or below. The mean, the average of all the answers, was 7.2. The lowest answer any respondent gave was a 2.

Two quotes that I found encouraging were:

“When you live long enough to see the world change a number of times, you start to look for the opportunities when change is inevitable.”
“Your examples in part 1 of the Briles webinar inspired me to try it for some research I’ve been procrastinating about, and wowsa, in 12 minutes it delivered 11-12 hours of research – so I’ve saved $4000 for my 15 upcoming industry-specific books I’m publishing this fall! Many, many thanks to you!!! I’m a fan forever and happy to give you this testimonial about your AI teaching.”

So where is this all going? I don’t know, and neither does anyone else; anyone who claims to know is making it up. What I do know is that it will be exciting to watch how all this unfolds over the next few years, both from a technical standpoint and a legal standpoint.