Artificially intelligent systems are slowly taking over parts of the creative industry. Machines now perform tasks that were previously carried out by humans, many of which involve simple, repetitive steps. But these systems are generating considerable debate and controversy in the creative field: many artists believe AI threatens their livelihoods and devalues art and creativity. Amid this controversy, OpenAI has launched yet another generator, Point-E, an open-source project that produces 3D models from text prompts.

The AI research company has already attracted a great deal of attention with its DALL-E program, which, like the rival systems Stable Diffusion and Midjourney, can produce realistic or fantastical graphics from descriptive text.

Although Point-E shares the "E" of OpenAI's DALL-E branding, it uses a distinct machine learning model called GLIDE, and right now it is not nearly as powerful. Given a text prompt such as "a traffic cone," Point-E produces a low-resolution point cloud (a collection of points in space) that roughly resembles a traffic cone.
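To make the idea of a point cloud concrete, the sketch below builds one by hand: it samples a few thousand XYZ coordinates on the surface of a cone and attaches an RGB color to each point. This is only an illustrative stand-in using NumPy, not Point-E's actual sampling code; the point count and the coordinates-plus-colors layout are assumptions chosen to mirror the kind of data such a model emits.

```python
import numpy as np

# A point cloud is just a set of 3D coordinates, optionally with a color per point.
# Here we fake a "traffic cone" by sampling points on a cone surface (illustration only;
# Point-E's real output comes from a diffusion model conditioned on the text prompt).
rng = np.random.default_rng(0)
n_points = 4096  # a few thousand points, roughly the scale of a low-resolution cloud

# Sample heights and angles, then shrink the radius toward the tip of the cone.
height = rng.uniform(0.0, 1.0, n_points)
angle = rng.uniform(0.0, 2.0 * np.pi, n_points)
radius = 0.4 * (1.0 - height)

xyz = np.stack([radius * np.cos(angle), radius * np.sin(angle), height], axis=1)  # (N, 3)
rgb = np.tile(np.array([1.0, 0.5, 0.0]), (n_points, 1))  # one orange color per point, (N, 3)

point_cloud = np.concatenate([xyz, rgb], axis=1)  # (N, 6): x, y, z, r, g, b
print(point_cloud.shape)  # (4096, 6)
```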

The final product falls well short of the 3D renderings used in movies and video games, but that is not the goal. Point clouds are an intermediate stage: once imported into a 3D program such as Blender, they can be converted into textured meshes that look like more familiar 3D models.
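The article does not spell out how that conversion happens. One common route, shown below as an assumption rather than the specific workflow OpenAI describes, is surface reconstruction with a library such as Open3D, followed by exporting a mesh file that Blender can import.

```python
import numpy as np
import open3d as o3d  # pip install open3d

# Stand-in point cloud: random points on a unit sphere (replace with a generated cloud).
rng = np.random.default_rng(0)
xyz = rng.normal(size=(4096, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)

# Poisson surface reconstruction needs consistently oriented per-point normals.
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(20)

# Reconstruct a triangle mesh from the points and save it as a PLY file,
# a format Blender can import and then texture or render.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.io.write_triangle_mesh("reconstructed_mesh.ply", mesh)
```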

The "E" in Point-E stands for efficiency: OpenAI says the system "generates point clouds efficiently." Unlike state-of-the-art techniques, which can require many GPU hours to produce a single rendering, Point-E can construct a 3D model in only one to two minutes of GPU time. According to one assessment, it is 600 times faster than Google's DreamFusion text-to-3D model.