
Understanding the mechanics behind artificial intelligence’s ability to produce text, images, music, videos, and even code can empower you to harness its full potential. Recognizing how closely its training resembles human learning processes can also dispel the misconception that it merely imitates.

Firstly, let’s delve into some foundational concepts. Two primary paradigms dominate the machine learning landscape: Supervised and Unsupervised Learning.

  • Supervised Learning: Model learns from labeled data to predict outcomes for new data.
  • Unsupervised Learning: Model explores unlabeled data to find patterns or structures without specific guidance on what the output should be.

Both paradigms have their own strengths and are suited for different types of tasks. The choice between them often depends on the nature of the data available and the specific problem being addressed.

Here’s a description of their differences:

Supervised Learning:

Supervised learning involves extracting patterns from known examples and using that extracted insight to engineer a repeatable outcome. Long before machine learning, humans used this same technique to reverse engineer everything from gunpowder weapons to automobiles.

The Japanese car manufacturer Toyota, for example, used this approach to design its first car prototype. Rather than design a unique process from scratch, the first Toyota was reverse engineered from three existing car designs: the 1936 GM Chevrolet’s engine, the 1936 Ford Model 68’s chassis, and the 1936 Chrysler De Soto’s design. Toyota created its first vehicle prototype after disassembling these cars in the corner of their family-run loom business in Japan. By observing the finished car (output) and pulling apart its individual components (input), Toyota’s engineers unlocked the design process kept secret by America’s automakers. They then took what they learned from multiple sources of input to develop the first Toyota Model A1.

This method of understanding a known combination of inputs to create a known output is replicated in machine learning using supervised learning. The machine learning model analyzes and deciphers the relationship between input and output data to learn the underlying patterns. In the case of AI art, the algorithm looks at images that are labeled with text descriptions such as “car”, “Toyota”, “building a car”, “wooden car frame”, “front of the car”, etc., and with more data and more experience, it begins to learn what constitutes a car and other objects based on millions of training examples. In the case of Midjourney, more than 650 million pieces of data were initially used as input. That is what is meant by giving the computer the ability to learn without being explicitly programmed.
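A minimal sketch of this idea in Python may make it concrete. Here a tiny "model" learns from labeled examples and predicts a label for new, unseen input by finding the most similar training example (the data values and the nearest-neighbor approach are my own invented illustration, far simpler than what real image models do):

```python
import math

# Toy labeled dataset: (weight in tonnes, top speed in km/h) -> vehicle type.
# These numbers are invented purely for illustration.
training_data = [
    ((1.2, 180), "car"),
    ((1.5, 160), "car"),
    ((8.0, 90), "truck"),
    ((12.0, 80), "truck"),
]

def predict(features):
    """Label a new example with the label of its nearest training example."""
    nearest = min(training_data, key=lambda pair: math.dist(pair[0], features))
    return nearest[1]

print(predict((1.3, 170)))   # a light, fast vehicle -> "car"
print(predict((10.0, 85)))   # a heavy, slow vehicle -> "truck"
```

The labels are the "supervision": the model never has to be told the rule distinguishing cars from trucks; it infers it from the labeled examples alone.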

Obviously, if you were a human and tasked with looking at this amount of data before creating your own unique output, this would be a monumental task, but a computer can do it very quickly. Most works created by humans are done in a similar manner, just with far less input data. For example, if you are writing a technical paper, you might reference a dozen or so articles written by your peers before writing up your own report.

Unsupervised Learning:

Imagine a young child named Alex who has never seen LEGO before. One day, Alex is given a large box filled with assorted LEGO pieces of various shapes, sizes, and colors. There are no instructions or pictures of finished models, just a jumble of pieces.

At first, Alex is curious and starts examining the pieces. Without being told what to do, Alex begins to naturally group similar pieces together. All the red bricks go in one pile, the blue in another, and so on. Alex also groups them by size and shape, putting all the long pieces together and all the square pieces in another group.

As Alex continues to play, patterns emerge. Alex realizes that certain pieces fit well together, while others don’t. Over time, Alex starts building structures based on these discovered patterns, like towers, bridges, and houses, even though no one showed Alex what a LEGO tower or bridge should look like.

With more playtime, Alex becomes more creative, combining pieces from different groups to create unique structures. These structures are entirely Alex’s own creations, born from experimentation and discovery.

In machine learning, unsupervised learning is similar to Alex’s exploration with the LEGO box. The algorithm is given a lot of data without specific labels or instructions. Just like Alex grouped LEGO pieces, the algorithm groups data based on similarities and patterns it identifies on its own. For instance, in a dataset of various animals, the algorithm might group them based on features it deems significant, like animals with wings or animals that swim, even if it doesn’t “know” what birds or fish are.

The goal is to uncover hidden structures and relationships in the data. And just as Alex discovered new ways to combine LEGO pieces, unsupervised learning can lead to novel insights and understandings about the data it processes.
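Alex’s sorting behavior can be sketched with a simple clustering algorithm. Below, a basic k-means routine (written from scratch for illustration; real systems use library implementations) groups unlabeled "LEGO piece" measurements into piles purely by similarity, with no labels telling it what the piles mean:

```python
import math
import random

# Unlabeled data: LEGO-piece "features" (length, width) with no labels attached.
pieces = [(1, 1), (1, 2), (2, 2),      # small, squarish pieces
          (8, 1), (8, 2), (10, 1)]     # long, thin pieces

def kmeans(points, k=2, iterations=10, seed=0):
    """Group points into k clusters purely from their similarity."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the average of its assigned points.
        centroids = [
            tuple(sum(coord) / len(coord) for coord in zip(*group))
            if group else centroids[i]
            for i, group in enumerate(clusters)
        ]
    return clusters

for group in kmeans(pieces):
    print(group)
```

The algorithm separates the small pieces from the long ones without ever being told those categories exist, just as Alex sorted the bricks without instructions.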

What About ChatGPT?

Before the unsupervised training approach used in models like ChatGPT, many AI models were like a customer support phone system. They were trained on specific tasks with specific answers, making them limited in their understanding and flexibility. If you asked the automated phone system how to reset your password, it could probably help you. But if you asked it anything outside of what it was trained on, you would need a human operator.

As we’ve come to learn, you can ask ChatGPT (and other AI software using the OpenAI API) almost anything on any topic and it can provide a decent answer. This flexibility is the game changer behind AI’s explosive growth.

Putting It All Together

Imagine you’re a chef who has been trained by tasting and studying thousands of dishes from various cuisines around the world. Over time, you’ve learned the flavors, ingredients, and techniques used in each dish. Now, when someone asks you to make a meal, instead of copying a specific dish you’ve tasted before, you use your knowledge to create a unique dish that combines flavors and techniques you’ve learned.

For example, you might take the spiciness from a Thai curry, the creaminess from an Italian Alfredo sauce, and the freshness of a Mexican salsa to create a completely new dish. You’re not copying any single recipe; you’re using your understanding of flavors and techniques to create something original.

Now, relating this to ChatGPT:

As it turns out, ChatGPT uses a combination of an unsupervised approach for its pre-training and a supervised approach for fine-tuning its output. It is much like that chef. It has “read” and “studied” vast amounts of text from the internet. When you ask it a question or give it a prompt, it doesn’t just recall a specific sentence or paragraph it has seen before. Instead, it uses its understanding of language, context, and the information it has learned to generate a unique response.

Just as the chef doesn’t reproduce an exact dish but creates a new one based on their knowledge, ChatGPT crafts its answers based on patterns it has learned, without directly copying from its training data.

In essence, ChatGPT’s ability to generate unique content comes from its extensive training, where it has learned the “flavors” and “techniques” of language, allowing it to “cook up” original responses every time.

Creating Better Prompts

By understanding how this works, you should now be able to create better prompts to give you the type of output you are looking for. It is important to know, however, that current AI models only use the data they have been trained on and do not store new information you feed them except as it relates to the current prompt.

For instance, if you provide ChatGPT with specific examples, like excerpts from blog posts or descriptions of writing styles, it can use that immediate context to shape its responses. You might give it a snippet of text and ask it to continue in that style, and it will attempt to generate a response that matches the tone, style, and content of the provided snippet. ChatGPT uses the provided examples only for the immediate context of that conversation. It doesn’t “learn” or “remember” the content beyond the current session. This is by design, to help ensure user privacy, so if you want to “train” ChatGPT to “write like me”, you must feed it examples each time you start a new session.
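This per-session workflow can be sketched as a small helper that reassembles the “write like me” prompt from scratch each time (the function name and sample snippets here are hypothetical, not part of any API):

```python
# Because ChatGPT keeps no memory between sessions, the style examples
# must be re-sent at the start of every new conversation.
def build_style_prompt(style_examples, task):
    """Assemble a 'write like me' prompt from stored writing samples."""
    parts = ["Here are examples of my writing style:\n"]
    for i, example in enumerate(style_examples, start=1):
        parts.append(f"Example {i}:\n{example}\n")
    parts.append(f"Matching the tone and style above, {task}")
    return "\n".join(parts)

prompt = build_style_prompt(
    ["The cloud isn't magic; it's just someone else's computer.",
     "Back up early, back up often, and test your restores."],
    "write a short tip about password managers.",
)
print(prompt)
```

The key point is that the examples live in your own notes or code, not in the model: every new session starts from zero, so the helper (or a saved text file) is what provides the continuity.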

The quality of output from ChatGPT can be influenced by the nature of the prompt, but whether a long, detailed prompt or a short, direct prompt is “better” often depends on the specific context and what you’re aiming to achieve. Here are some considerations:

  1. Clarity and Specificity: Regardless of length, a clear and specific prompt is more likely to yield a relevant and accurate response. If the model isn’t given enough context or if the question is ambiguous, it might generate a more generic or off-target answer.
  2. Detail for Context: For complex topics or nuanced questions, a longer, more detailed prompt can provide the necessary context for the model to generate a more informed and relevant response.
  3. Brevity for Simplicity: If you’re asking a straightforward question or seeking a concise answer, a short and direct prompt might be more effective.
  4. Avoiding Overloading: While detail can be helpful, excessively long prompts might lead to information overload, causing the model to miss the main point or focus on less relevant details.
  5. Iterative Approach: Sometimes, it’s beneficial to start with a shorter prompt and then refine or expand based on the model’s response. This iterative approach can help narrow down the information or style you’re seeking.
  6. Guiding the Output: If you have a specific format or style in mind for the answer, guiding the model with a more detailed prompt can be beneficial. For instance, if you want a summary in bullet points or an answer in a storytelling format, specifying that in the prompt can help.
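As a concrete sketch of that last point, guiding the format can be as simple as appending an explicit instruction to the question (the helper below is my own illustration, not an official technique):

```python
def guided_prompt(question, output_format):
    """Append an explicit format request so the model knows how to answer."""
    return f"{question}\n\nPlease answer as {output_format}."

print(guided_prompt(
    "What are the main differences between supervised and unsupervised learning?",
    "a bullet-point list with one sentence per bullet",
))
```

A plain question leaves the format up to the model; stating the format removes that guesswork.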

In summary, the “best” prompt length varies based on the situation. It’s often a balance between providing enough context for the model to understand the question and being concise enough to get a direct answer. Experimenting with different prompt styles and lengths can help you find what works best for your specific needs. And often, as mentioned in the iterative approach, you may need to refine your question by having a “chat” with the model to clarify or add information, much like you would when consulting a colleague or talking to another human.