Meet Black Forest Labs, the startup powering Elon Musk's unfiltered AI image generator.

August 15, 2024
Brian

Elon Musk's Grok rolled out a new AI image-generation tool on Tuesday night that, like the chatbot itself, offers minimal safeguards. That means users can create fake images of, say, Donald Trump smoking marijuana on the Joe Rogan show and publish them directly to X. But it isn't Musk's own AI company powering the feature: the controversial tool is built on a model from a new startup called Black Forest Labs.

The relationship came to light when xAI announced it is partnering with Black Forest Labs to power Grok's image generator with the startup's FLUX.1 model. Black Forest Labs, an AI image and video startup that launched on August 1, appears to align with Musk's vision of Grok as an "anti-woke" chatbot, operating without the strict guardrails found in OpenAI's DALL-E or Google's Imagen. X is already flooded with outlandish images from the new feature.

Black Forest Labs is based in Germany and recently emerged from stealth with $31 million in seed funding led by Andreessen Horowitz, according to a press release. Other notable investors include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The startup's co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, are researchers who previously helped develop Stability AI's Stable Diffusion models.

According to Artificial Analysis, Black Forest Labs' FLUX.1 models outperform Midjourney's and OpenAI's AI image generators in quality, as rated by users in the site's image arena.

The startup says it is "making our models available to a wide audience," with open source AI image-generation models published on Hugging Face and GitHub. It also plans to release a text-to-video model soon.

In its launch announcement, the company says it aims to "enhance trust in the safety of these models"; some might argue, however, that the flood of AI-generated images on X on Wednesday did the opposite. Many images users produced with Grok and Black Forest Labs' tool, such as Pikachu clutching an assault rifle, could not be recreated with Google's or OpenAI's image generators. The outputs also strongly suggest that copyrighted imagery was used in the model's training.

That's sort of the point.

This lack of safeguards is likely one of the main reasons Musk chose this partner. Musk has made clear that he believes safeguards actually make AI models less safe. "The danger of training AI to be woke — in other words, lie — is deadly," Musk wrote in a 2022 tweet.

Anjney Midha, a board director at Black Forest Labs, shared on X a series of side-by-side comparisons of images generated on launch day by Google Gemini and by Grok's FLUX.1 integration. The thread highlights Google Gemini's well-documented failures in producing historically accurate depictions of people, notably its inappropriate injection of ethnic diversity into images.

"I'm glad @ibab and team took this seriously and made the right choice," Midha tweeted, pointing to FLUX.1's apparent avoidance of the issue (and noting the account of xAI head researcher Igor Babuschkin).

After that error, Google apologized and disabled Gemini's ability to generate images of people in February. To this day, the company still does not allow Gemini to generate images of humans.

A firehose of falsehoods.

This broad lack of safeguards could create problems for Musk. X came under fire after AI-generated explicit deepfake images of Taylor Swift went viral on the platform. Beyond that incident, Grok regularly generates hallucinated headlines that are surfaced to users on X.

Just this week, five secretaries of state urged X to stop spreading disinformation about Kamala Harris. Earlier this month, Musk reshared a video that used AI to clone Harris' voice, making it appear as though the vice president admitted to being a "diversity hire."

Musk appears intent on letting falsehoods like these spread across the network. By allowing users to post Grok's AI images, which appear to lack any watermarks, directly to the site, he has effectively opened a firehose of misinformation pointed at everyone's X newsfeed.