If you are old enough, you might remember when personal computers first came out. Sure, we’d had computing machines for years before that — machines that used vacuum tubes, punch cards, and magnetic and paper tape, and that took up the entire basement of a building. But the PC revolutionized the world. And while people flocked to this innovation, many were scared, sure that it would ruin the world. Software like spreadsheets was going to eliminate accounting jobs. Computer-aided design software like AutoCAD was going to put architects and engineers out of work. Interestingly, the opposite happened: these tools created more jobs and more opportunities.
Then we got the internet. Computers all over the world were now connected, and again, people flocked to this innovation. Yet many worried that connecting everything would spell the end of the world. In fact, at the turn of the century, much of the world was convinced that Y2K would mean airplanes falling out of the sky, banks failing, and power companies triggering a kind of Armageddon — all because of a programming glitch in how software handled dates, made more alarming by the fact that all these computers were now connected. In the end, rather than causing the end of the world, the internet flourished, and today it is part of our daily lives.
Then came the cell phone. Again, we’d had portable phones for years, although you might have had to carry one in a briefcase or have it permanently attached to your car. When cell phones first came out, they did one thing: make phone calls. You might be able to text if you were willing to push a button multiple times to produce a letter — pressing the number 7 four times to make an “S,” for example. Then Samsung manufactured a chip that allowed Apple to introduce the original iPhone, which contained not only a phone but an MP3 player and a camera. Cell phones have since become part of our daily lives, and most of us can’t imagine life any other way.
Like these other innovations, AI has been around for years. Even generative AI has been around for six decades. In 2015, Autodesk began publishing research on Project Dreamcatcher, a generative design tool that uses algorithms to create new designs; users can describe intended properties such as materials, size, and weight. In 2018, OpenAI released GPT (Generative Pre-trained Transformer). Trained on about 40 gigabytes of data and consisting of 117 million parameters, GPT paved the way for subsequent LLMs in content generation, chatbots, and language translation. Then, in November of 2022, the launch of ChatGPT catapulted generative AI into the public consciousness in just a matter of months. Many consider 2023 to be the year of AI, as over 50% of companies started using AI in their businesses. People who had never heard the term a year earlier were suddenly proclaiming themselves experts.
Look at any history textbook, with its illustrations and monochrome photographs of 19th-century California gold seekers, and you will see a timeless human trait: the pursuit of prosperity and the willingness to venture into uncharted territory. That spirit persists, but the context has changed. Today we stand at the forefront of a modern, sophisticated, and global “gold rush” — one focused on artificial intelligence (AI). This new era is marked by rapidly advancing AI technology that is attracting immense interest worldwide. The AI revolution is set to open up unprecedented opportunities, reshaping industries, education, healthcare, and many other facets of human endeavor in ways we are only beginning to imagine.
At the same time, many people fear that AI will mark the end of the world. The fears surrounding AI — job displacement, warfare, political manipulation, and apocalyptic scenarios — are often exaggerated. Here’s why these extreme outcomes are unlikely: