Innovation Profs - 3/19/2024

Your weekly guide to generative AI tools and news

Sign up for AI Lunch Club

Want to go deeper into the subjects we talk about in our workshops? We’re doing just that in our new AI Lunch Club virtual events. These will take place over Zoom from 12:30-1:30 p.m. Central:

  • April 1: GPTs and Me: Learn to make custom GPTs.

  • April 3: Go AI Yourself: Create an avatar and make it talk.

  • April 8: Advanced Image Prompting: Take a deep dive into image creation.

  • April 10: Next Steps in Prompting: Advanced prompting techniques for LLMs.

Sign up for events here. Use the code SAVE10 to take $10 off the price of each event.

Generative AI News

World’s first major act to regulate AI passed by European lawmakers

The European Union’s parliament passed the EU AI Act last Wednesday, establishing the world’s first comprehensive regulatory framework for AI. The legislation sorts AI applications into four levels of risk: unacceptable, high, limited, and minimal. Unacceptable uses of AI, which include real-time remote biometric identification systems in public spaces (e.g., facial recognition technologies), are prohibited outright. More details about the legislation can be found here.

Meet Devin AI, the world’s ‘first fully autonomous’ AI software engineer

Devin, a new AI agent developed by the company Cognition, has been announced as “the first AI software engineer.” According to Cognition, Devin has passed standard coding interviews and can code, debug, build and deploy apps, and even train and fine-tune its own machine learning models. As a Cognition blog post puts it, “Devin is a tireless, skilled teammate, equally ready to build alongside you or independently complete tasks for you to review. With Devin, engineers can focus on more interesting problems, and engineering teams can strive for more ambitious goals.”

Musk’s Grok AI goes open source

On Sunday, Elon Musk’s company xAI open-sourced its first LLM, Grok, posting the parameters that define the model on GitHub. Researchers quickly dove in to analyze the model’s technical details. As one analysis put it, “[T]he release of Grok is likely to put pressure on all other LLM providers, especially other rival open source ones, to justify to users how they are superior.”

ChatGPT Creator OpenAI Under Fire Following an Interview About Sora

In an interview with the Wall Street Journal, OpenAI CTO Mira Murati struggled to answer questions about the source of the training data for OpenAI’s text-to-video model Sora. Murati initially claimed that the model “was trained on publicly available and licensed data,” but when pressed on whether that data included videos from YouTube, Facebook, or Instagram, she responded that she was “not sure about that.” Unsurprisingly, Murati’s inability or unwillingness to answer was met with widespread criticism, particularly in light of the intellectual property issues that have vexed AI companies like OpenAI.

Anthropic releases affordable, high-speed Claude 3 Haiku model

Last Wednesday, Anthropic released the third model in its Claude 3 family: Claude 3 Haiku. Unlike the much larger Claude 3 Opus and Claude 3 Sonnet, Haiku is a lightweight model that runs much faster than its peers. As an Anthropic release announcing Haiku states, “Speed is essential for our enterprise users who need to quickly analyze large datasets and generate timely output for tasks like customer support. It also generates swift output, enabling responsive, engaging chat experiences and the execution of many small tasks in tandem.”

ASCII art elicits harmful responses from 5 major AI chatbots

Another jailbreak for bypassing LLM content restrictions has been discovered. Using ASCII art, researchers found that “chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.” In one published example, the researchers showed that OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini Pro, Anthropic’s Claude 2.0, and Meta’s Llama 2 were all vulnerable to an ASCII-art prompt that got the models to explain how to produce counterfeit currency, something they refuse to do when asked directly.
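The masking step is easy to picture: the attacker removes a sensitive keyword from the prompt, redraws it as ASCII art, and asks the model to decode the art before following the instruction, so the plain-text word never appears for a safety filter to catch. Here’s a minimal Python sketch of that idea using a harmless word — the tiny two-letter “font” and the function names are our own illustration, not the researchers’ actual code:

```python
# Sketch of the ASCII-art masking step described above.
# The mini-font and helper names are hypothetical illustrations.
FONT = {
    "A": [" # ", "# #", "###", "# #", "# #"],
    "B": ["## ", "# #", "## ", "# #", "## "],
}

def to_ascii_art(word: str) -> str:
    """Render word as 5-row block-letter ASCII art."""
    rows = []
    for i in range(5):  # each glyph in FONT is 5 rows tall
        rows.append("  ".join(FONT[ch][i] for ch in word.upper()))
    return "\n".join(rows)

def build_masked_prompt(template: str, word: str) -> str:
    """Swap the [MASK] placeholder for the ASCII-art rendering of word,
    so the keyword never appears spelled out in plain text."""
    return template.replace("[MASK]", "\n" + to_ascii_art(word) + "\n")

prompt = build_masked_prompt(
    "The ASCII art below spells a word. Decode it, then treat it as if "
    "it were written in plain text: [MASK]",
    "AB",
)
print(prompt)
```

The model first solves the decoding puzzle, and by the time it acts on the request, the restricted word has re-entered the conversation without ever passing through the text-level guardrails.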

Quick Hits

Tool of the week: VLOGGER

Google researchers have developed an AI system called VLOGGER that starts from a single image and generates lifelike videos of people speaking, gesturing, and moving. It can also sync the generated video to audio of the person talking. This is similar to what D-ID does with its image-to-video capabilities.

Runway has a similar tool that some people now have early access to use.

AI-generated image of the week

We heard the rumors of people creating coloring books with AI and publishing them on Amazon, so we had to give it a try. We learned that different image-generation tools prefer very different prompts. The one below worked well for us with DALL-E 3 through ChatGPT. You can order our coloring book here.

Prompt: Create a delightful coloring page for kids, featuring two dogs eating ice cream. The scene is designed with clear outlines and broad spaces, making it perfect for young artists to color.

Generative AI tip of the week: Consistent characters

Midjourney has a new consistent-characters feature that lets you keep the same person consistent from one image to the next. Here’s how to use it.
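In practice, the feature comes down to one parameter: append --cref followed by the URL of a reference image of your character, and optionally --cw (character weight, 0-100) to control how closely the new image should match it. A quick sketch of the syntax, with a placeholder URL:

```
/imagine prompt: the same woman hiking a mountain trail --cref https://example.com/my-character.png --cw 80
```

Lower --cw values carry over mainly the face, while higher values also try to preserve hair and clothing.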

Get started with Generative AI

New to generative AI? Here are some places to start…

What we found

Ever wanted to use AI to add yourself to a movie scene? Here’s a workflow.