
I just completed the second webinar in a series called The AI Revolution. These are the top questions most people have about this amazing tool that is reshaping how we do business:

  1. How does AI work?
  2. What are the legal issues due to possible plagiarism & copyright infringement?
  3. What are the ethical concerns over using AI Content?
  4. Does Google devalue content written by AI?
  5. How accurate and reliable is AI content?
  6. Is AI going to replace me?

These are all great questions and things you should understand before simply using AI in your work. Yesterday’s webinar (and this article) are intended to help overcome much of the misinformation that is being spread about AI. AI is not just about replacing human labor or streamlining processes. It is about creating entirely new opportunities and driving business innovation. This really is a new era, and many traditional online businesses may be left behind if they do not embrace this technology. At the same time, if you don’t understand the technology, you could also get burned.

Let’s take a look at each of these questions and see if we can make sense of all this from a non-technical standpoint.

Background

In addition to conducting the webinars I mentioned and writing my most recent book, “Written by a Robot,” I began developing computer software back in the 1970s. One of the languages I became proficient in was LISP (List Processing). LISP has been widely used in the field of artificial intelligence (AI) since its inception; in fact, it has been called the “language of AI.” This is because LISP provides a flexible and powerful tool for manipulating symbolic expressions, which is a key part of many AI techniques.

One of the main advantages of LISP is its ability to handle symbolic data structures such as lists and trees. This makes it well-suited for tasks such as natural language processing, where words and sentences can be represented as symbolic structures. Another advantage of LISP is its dynamic nature: LISP programs can modify their own behavior at runtime, allowing for more flexible and adaptive AI systems.

What all this means to you is that I have some knowledge about AI systems and how they work. I can explain to you how AIs such as ChatGPT are able to generate unique content. However, I am not an attorney, so nothing I say is legal advice.

How does AI work?

You don’t need to know how a camera works to take photos. But if you understand aperture, shutter speed, focal length, selective focus, and composition, you will take better images. The same is true with AI. If you know how it works, you can use it better.

AI is trained to recognize patterns and make decisions based on data. Unlike traditional software, which is programmed with specific rules and functions, AI uses algorithms to learn from data and improve over time. In other words, it can adapt and evolve on its own, without being explicitly programmed to do so. Pattern recognition is key to how AI software works.
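As a concrete (if toy) illustration of learning from data rather than from hand-written rules, here is a minimal sketch in Python using the scikit-learn library. The fruit measurements and labels are invented purely for this example; the point is that nobody writes an “if the weight is over X grams” rule by hand, the model works the rule out from the examples it is shown.

```python
# Toy sketch: the model learns its own rule for telling fruits apart
# from example measurements, rather than being given the rule.
from sklearn.tree import DecisionTreeClassifier

# Training examples: [weight in grams, diameter in cm] (made-up numbers)
samples = [[130, 7.0], [140, 7.5], [150, 8.0],   # oranges
           [110, 6.0], [115, 6.2], [118, 6.4]]   # lemons
labels = ["orange", "orange", "orange", "lemon", "lemon", "lemon"]

model = DecisionTreeClassifier()
model.fit(samples, labels)            # the "training" step: patterns are learned here

print(model.predict([[135, 7.2]]))    # -> ['orange']
```

Systems like ChatGPT are vastly larger, but the principle is the same: show the software enough examples and it learns the patterns on its own.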

For example, in the image above, AI software can recognize which fruits are oranges, whether they are whole or cut in half. Because it knows what an orange looks like, it can create new images of oranges and place them in other images. Text works essentially the same way: ChatGPT recognizes various patterns in text, such as grammar rules, syntactical structures, and semantic relationships between words and phrases.

When ChatGPT is asked to generate new written content, it uses its understanding of text patterns to construct a response that is coherent and relevant to the input it receives. This is done by analyzing the input text and identifying key concepts, themes, and patterns. It then uses this information to generate new text that is consistent with the style, tone, and meaning of the input text.

ChatGPT is trained to predict the next word or phrase in a given context. To do this, it is fed a large data set of human-generated text, which it uses to learn the patterns and structures of natural language. It does not copy the text; it simply uses it to learn those patterns.
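If you are curious what “predicting the next word” looks like in practice, here is a minimal sketch using the openly available GPT-2 model (a much smaller ancestor of ChatGPT) through the Hugging Face transformers library. The prompt text is just an example; the point is that the model assigns a probability to every possible next word.

```python
# Minimal sketch: ask GPT-2 (a small, public ancestor of ChatGPT)
# for the most likely next words after a prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "An orange is a kind of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every word in the vocabulary

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)        # the five most likely continuations

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

Notice that the model never looks up a stored answer; it simply scores which word would be the most natural continuation of the words it has already seen.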

Think of it this way: when you ask Google to look something up, you probably know that it doesn’t go out and scout the entire web for answers right then. Rather, it has already indexed those pages and simply retrieves pages from its database that match your request. Essentially, there are two main phases: the crawling and data gathering phase, and the query phase.

ChatGPT works roughly the same way. The data-gathering phase is called pre-training. The data is then fed into a type of software called a transformer, which is a type of neural network used for processing natural language. This is how GPT gets its name, which stands for Generative Pre-trained Transformer. The heavy lifting is all done during the pre-training phase.

The user interaction phase is called inference. The technology works by breaking down language inputs, such as sentences or paragraphs, into smaller components and analyzing their meanings and relationships to generate insights or responses. ChatGPT is programmed to produce the next symbol in a string of symbols that is the most likely complement, or continuation, of the symbols it is fed by a human at the command prompt.
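Here is the same idea at the inference stage, again using GPT-2 as a stand-in for the pre-trained model: it takes a prompt and keeps appending whichever words it judges most likely to continue it. The prompt and length settings are just examples for this sketch.

```python
# Inference sketch: a pre-trained model continues a prompt by
# repeatedly predicting a likely next word (GPT-2 as a stand-in).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "AI is reshaping how we do business because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

All of the expensive work happened during pre-training; at inference time the model is simply applying the patterns it already learned.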

What are the legal issues due to possible plagiarism & copyright infringement?

Based on what we just saw about how AI software works, the output from generative AI software like ChatGPT is unique and does not directly plagiarize any specific work. It draws on a much larger data set than a human would when doing research, making it less likely to produce work that can be attributed to a single source. The works are not derivative, and they are not the product of simple article spinning; the output is truly unique. Immediately writing off AI as plagiarizing content means you don’t understand how AI generates content in the first place. Anyone who tells you otherwise and claims that ChatGPT is plagiarizing content is spreading misinformation.

That said, the law has not caught up with the technology, so there is no final answer yet. ChatGPT is not plagiarizing insofar as it is not copying someone else’s work outright. But generative technology does use human-generated text as the basis for its training, and it uses human-generated images or photos to create new images. This image may help you understand the idea: it combines a dog, a cat, and a mouse into an entirely new image that did not previously exist. No one has ever created a photo of such an animal, but the software used its pattern-recognition abilities to generate the image.


That’s pretty cool – if this doesn’t infringe on anyone’s intellectual property (IP), can I then copyright the works created by AI and claim them as my own, since I’m the one who input the prompt? The case of Naruto v. Slater established that only people, not monkeys or robots, may hold copyrights. This implies that any work developed solely by AI falls within the public domain and may be used by anybody without authorization.

So, the answer appears to be no. If you develop a work based on a work in the public domain, you may only claim copyright for your unique contributions. You cannot claim copyright on a work in the public domain. AI-generated text, images, music, videos, and so on are in the public domain. Only if you significantly modify the content can you claim a copyright based on those modifications.

Which of course leads us to the next concern: is using AI generated content ethical?

What are the ethical concerns over using AI Content?

This question primarily stems from academia, with students turning in essays written by AI. In many cases, AI-created essays have scored very well, even at the graduate level. This causes deep concern over using AI in the classroom.

But it obviously applies to the business world as well. AI is a great tool for doing research, generating ideas, and speeding up the writing or graphic-design process. The problem comes with claiming the result as your own. My personal recommendation is to always use it as a guideline, then add in your own experiences and creativity to make it yours. If you use AI-generated content for the majority of an article, paper, book, etc., you should give credit where credit is due and cite the AI tool as a reference.

And if you have concerns over using AI-generated content, don’t use it. Go back to doing your research on your own, spending hours and hours compiling your information while your competitors leave you in the dust.

Does Google devalue content written by AI?

I’m starting to get tired of people telling me that Google will not rank content that is generated by AI. Yes, there are tools that claim to detect whether content was created by ChatGPT (although such tools are not completely accurate, as evidenced by the fact that GPTZero thinks the US Constitution was written by AI).

The fact is that standard plagiarism detection methods don’t work, because AI is creating entirely new pieces of work, as we saw earlier. The problem for those setting and assessing academic writing tasks is that even the most up-to-date plagiarism checkers cannot keep up. Attempting to do so would be “a futile computational arms race,” much like trying to keep up with hackers. It is never going to be perfect.

Which leads us to Google devaluing content created by AI. As of January 11, 2023, SEO.ai reports that Google no longer states it is against all automatically generated content, including AI-generated content. Using AI content-generation tools is not against Google’s policies.

The truth is that AI content generators draw on thousands of documents to absorb information. By synthesizing all of these documents, generators can actually help improve search engine optimization.

View this article on Spiceworks for further information: https://www.spiceworks.com/marketing/content-marketing/news/ai-generated-content-not-against-google-policies/

How accurate and reliable is AI content?

One of the primary concerns when it comes to AI writing articles is the accuracy and reliability of the output. This is a legitimate concern. AI systems are designed to generate content based on patterns and data sets, and if the data is not properly verified or updated, the AI system may produce inaccurate articles or images.

For example, when I told ChatGPT to “act as if you are Greg Jameson and write an introduction for yourself,” it made up (or “hallucinated,” as the AI industry calls it) information about me that was not true. The logical pattern was that if I were an Inc 500 award-winning entrepreneur who has spoken all over the world, surely I must have been featured in Forbes magazine. Sadly, not true (at least not yet). This is why you MUST edit the output, as I mentioned earlier.

If you are using AI to create factual pieces, always cite your references. And always check to make sure it is accurate.

Is AI going to replace me?

Because my name is associated with using AI for ecommerce, I have been dubbed some kind of expert on the topic. And various people have tagged me as some kind of devil for advocating these tools. The truth is I really don’t know where we are headed with this. It is going to be exciting to see how this unfolds over the next few years, both from a technical standpoint and a legal standpoint.

I have found some of the stories that have been shared with me both interesting and concerning. For example, there was a guy who was concerned about ChatGPT replacing him as a writer, but excited about how MidJourney means he doesn’t have to pay for images anymore. Kind of a double standard.

My son-in-law is a graphic artist and does not feel threatened by AI, but I have another friend whose daughter is an artist and she is terrified. The reality is that new jobs will be created as old ones are replaced. This has always been the case as new technologies are introduced.

It is unlikely that AI will replace humans completely in most industries, as many tasks still require human creativity, critical thinking, and empathy. But make no mistake, AI is here to stay and it will only continue to evolve. If you want to remain competitive, you must learn how to maximize its potential.

Note: the images in this article were generated by AI using MidJourney.