
Archive for June, 2024

Netflix has announced the first two locations for their new in-person venues: King of Prussia, Pennsylvania, and Dallas, Texas.

The first thought that most people are going to have when hearing this is that Netflix just invented the movie theater. We already have a place to go to watch movies and the whole point of Netflix is to stream content from the comfort of your living room without having to go out. And, you know, “Netflix and chill”. So, what are they getting at here?

Well, the venues aren’t designed to be simple in-person theaters. Rather, they’re something much more than that, with the goal of drumming up interest in Netflix’s properties.

Variety explains what the Netflix Houses are all about:

“They’re not exactly theme parks, but the new Netflix Houses — to open next year in King of Prussia, Pa., and Dallas — will feature a wide array of shopping outlets, eateries and experiential activities tied to the streamer’s major franchises like ‘Bridgerton,’ ‘Stranger Things’ and ‘Squid Game.’…

Outside each Netflix House, you’ll see sculptures and a mural mash-up of characters from popular Netflix titles, according to the company. As examples of what to expect at the brick-and-mortar complexes, the company says, ‘Imagine waltzing with your partner to an orchestral cover of a Taylor Swift song on a replica of the ‘Bridgerton’ set –– and then walking around the corner to compete in the Glass Bridge challenge from ‘Squid Game.” Visitors can then visit a restaurant ‘with food inspired by Netflix shows from around the world’ and then browse a store with merch such as ‘that Hellfire Club T-shirt you’ve always wanted’ from ‘Stranger Things.'”

So, we’re talking about recreating sets, reenacting moments, purchasing merchandise, and eating food inspired by the content and locations. An immersive Netflix-themed experience that doesn’t even seem to involve watching the actual shows. It’s another example of the recent trend emphasizing in-person experiences, such as Airbnb’s Icons, a trend that people like Sam Altman think is going to explode:

The theory is that as AI becomes more prevalent, it’ll take things away from us, making certain jobs or industries obsolete. But it’ll also lead to new things and new product categories as society shifts, and one of those shifts could be toward more human connection and more fantastical, larger-than-life in-person experiences.

Count me in.

Are Netflix Houses the Greatest Idea Ever?


Elon Musk has a history of making bold claims that are far-fetched or at the very least overly optimistic. His time frames for when we’ll get to Mars or have fully autonomous self-driving cars have been way off, and such bold claims, as in the case of self-driving, have been dangerous.

His latest wild claims have centered around the Tesla Bot: just how many we’ll have, and how valuable that’ll make Tesla. Musk sees the market for humanoid robots reaching a billion units per year, with Tesla capturing 10% of that market for a trillion dollars in profit. The Tesla Bot will certainly be a major player in the humanoid robot space thanks to Musk’s popularity and his legions of fanboys who will buy anything he sells, whether that’s a flamethrower or a hideous Cybertruck. But there’s certainly going to be plenty of competition out there when it comes to humanoid robots that live and work among us.

A few weeks ago a video from a Chinese factory went viral, and for good reason: it showed an army of Westworld-looking humanoid robots under construction:

And that’s just the tip of the synthetic iceberg.

New Atlas explains:

“The AI-powered humanoid robot space is starting to get almost as crowded as the cereal aisle at your local supermarket. Last month alone, we were treated to two impressive offerings from OpenAI. One, a laundry-folding bot from Norwegian collaborators 1X that showed off impressive “soft-touch” skills, and the other a bot from collaborators Figure that demonstrated truly next-gen natural language reasoning ability. Then this month, Boston Dynamics blew us away with the astounding dexterity embedded in its new Atlas robot and China’s UBTech impressed its soft-touch speaking bot, Walker S. And the list goes on.

But today’s video showing off the skills of an AI-powered bot known as S1 from a relatively unknown Shenzhen-based subsidiary of Stardust Intelligence called Astribot truly gave us the chills. It’s fast. It’s precise. And it’s unlike anything we’ve seen so far.

According to Astribot, the humanoid can execute movements with a top speed of 10 meters per second, and handle a payload of 10 kg per arm. The fact that its website shows that an adult male falls well short of both of these and other Astribot metrics shouldn’t be cause for alarm at all. That speed, as the video shows, is fast enough to pull a tablecloth out from under a stack of wine glasses without having them come crashing to the ground.

But the bot is not only speedy, but also incredibly precise, doing everything from opening and pouring wine, to gently shaving a cucumber, to flipping a sandwich in a frying pan, to writing a bit of calligraphy. The video also shows that the robot is very adept at mimicking human movements, which means it should be a good learner.”

You can check out a video of S1 below:

With so many companies interested in developing this technology, and with the AI needed to power them advancing quickly, it’s inevitable that we’re going to have humanoid robots in our lives within the next decade. They’ll live in our homes (as in the case of Astribot’s S1), work in our factories (Tesla Bot), and eventually interact with us (in China). The race to develop the best one is officially underway.

The AI-powered humanoid robots are coming.


What do you do if you get stuck while playing a video game? Give up? Look up cheat codes? Look up tutorials online or read the instruction manual? Ask a friend to help? Well, in the future you can just ask your gaming AI assistant thanks to Nvidia’s Project G-Assist.

Nvidia explains on their blog:

“PC games offer vast universes to explore and intricate mechanics to challenge even the most dedicated gamer. Many of us spend hours scouring the internet for assistance to learn and master our favorite titles. The amount of knowledge available is immense, spanning millions of guides and countless words.

Project G-Assist aims to put game and system knowledge at players’ fingertips. Simply ask a question about your game or system performance, and the AI assistant provides context-aware answers.

Project G-Assist takes voice or text inputs from the player, along with a snapshot of what’s in the game window. The snapshot is fed into AI vision models that provide context awareness and app-specific understanding for the Large Language Model (LLM), which is connected to a database of game knowledge such as a wiki. The output of the LLM is an insightful and personalized response—either text, or speech from the AI —based on what’s happening in-game.

G-Assist’s vision and language models can be customized by developers for a specific game or app, providing a high degree of accuracy and insightfulness.”
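The pipeline Nvidia describes above (screenshot plus player query in, context-aware answer out) can be sketched at toy scale. To be clear, every function and name below is a hypothetical placeholder of mine, not Nvidia's actual SDK:

```python
# Hypothetical sketch of the G-Assist flow described above; none of these
# names come from Nvidia's real API.

def vision_model(screenshot):
    # Stand-in for the AI vision model: extracts game context from a frame.
    # Here we fake it with a simple keyword check on the screenshot contents.
    if "dinosaur" in screenshot:
        return {"scene": "open plains", "threat": "Carnotaurus nearby"}
    return {"scene": "unknown", "threat": None}

def knowledge_lookup(game, query):
    # Stand-in for the game-knowledge database (e.g. a wiki index).
    wiki = {("ARK", "early weapon"): "Craft a spear: 2 flint, 8 wood, 12 fiber."}
    return wiki.get((game, query), "No entry found.")

def g_assist(game, query, screenshot):
    # Combine visual context and wiki knowledge; a real system would hand
    # this combined prompt to an LLM, but here we just return it directly.
    context = vision_model(screenshot)
    knowledge = knowledge_lookup(game, query)
    return (f"[context: {context['scene']}, threat: {context['threat']}] "
            f"{knowledge}")

answer = g_assist("ARK", "early weapon", "a dinosaur on the plains")
print(answer)
```

The point is the plumbing: the vision model's context and the wiki lookup both feed the language model, which is what makes the answer "context-aware" rather than a generic search result.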

In addition to useful explanations, there will also be links directing you online to find more information…

Activated by pressing a hotkey, or using a wake phrase, the AI Assistant can help players with questions they often find themselves looking up online: whether it’s about quests, items, lore, or difficult to tackle bosses. In ARK: Survival Ascended, it may suggest the best early game weapon and where to find the necessary materials to craft it.

Leveraging AI vision models to understand what is happening in the game window, the AI Assistant can advise whether an on-screen dinosaur should be avoided, or different approaches to taming a particular beast.

And, because the assistant is context aware, it can personalize its recommendations to the user’s playthrough: it can analyze skill points, the crafting menu and currently locked engrams, for example, and suggest what to pick next to help players conquer higher-level areas and foes.

Should you wish to learn more about a particular answer, context-sensitive links can direct you to additional information online, from the official ARK wiki in this instance.

In an RPG or level-based game, the assistant could hint at the location of secret items and missable lore, before giving you step by step instructions to some or all, if asked. In an RTS, it could share the recommended early game build order, or resource management strategies to help you get on level terms with long-time players. And in an FPS, you could instantly discover the current loadout meta, along with all recommended attachments, leveling the field and ensuring you’re not at a competitive disadvantage.”

Competitive disadvantage? With an AI-powered gaming assistant at your fingertips, you’ll always have the advantage, helping you enjoy your favorite games even more and avoid those frustrating experiences where you get stuck and want to quit.

Is Project G-Assist the Greatest Idea Ever?


A new competitor to OpenAI’s Sora just dropped, and its ability to dream, to bring memes to life and extend them past their normal ending points in new and exciting ways, or to animate famous paintings, will blow your mind. Say hello to Luma’s Dream Machine, a new tool that could eventually dream up entire movies.

Mashable introduces us to this new entry:

“Dream Machine was created by San Francisco artificial intelligence startup Luma AI, known as the minds behind 3D model generator Genie.

Rather than dump its video offering behind a subscription fee, Luma AI has decided to launch a free model available for anyone to use and experiment with. There are also plans to release a developer-friendly API in the coming months—Sora, but for the masses.

AI enthusiasts immediately began pushing the limits of the new generator, including some interconnected experiments using Dream Machine to animate static AI images made using non-Luma tools like Midjourney.

A fair amount of early user examples also include eerily-moving, ‘living’ recreations of famous art, like The Girl with the Pearl Earring and Doge (rest in peace).

But beyond Harry Potter-esque living portraits, Beta testers say the tool can ‘faithfully render specified objects, characters, actions, and environments while maintaining fluid motion and coherent storytelling,’ VentureBeat reported. The company’s larger aim, it says, is to create a ‘universal imagination engine’ that could ‘dream’ up just about any video concept, including storyboarding and character supports, music videos, and eventually full-length movies.”

This is an impressive tool, and the fact that it was made available for free will enable more people to jump on the AI art bandwagon and hone their skills early on. For now the focus is on how this tool compares to Sora. But the key long-term takeaway is that this could be the technology that enables us to dream up feature-length films from simple prompts, making my dream of becoming a filmmaker a reality.

Is Dream Machine the Greatest Idea Ever?


AGI has finally been achieved, in the form of Artificial Gerbil Intelligence, thanks to Harvard and DeepMind creating a virtual rodent powered by AI to study how brains control movement.

As Science Daily puts it:

“The agility with which humans and animals move is an evolutionary marvel that no robot has yet been able to closely emulate. To help probe the mystery of how brains control movement, Harvard neuroscientists have created a virtual rat with an artificial brain that can move around just like a real rodent.

Bence Ölveczky, professor in the Department of Organismic and Evolutionary Biology, led a group of researchers who collaborated with scientists at Google’s DeepMind AI lab to build a biomechanically realistic digital model of a rat. Using high-resolution data recorded from real rats, they trained an artificial neural network — the virtual rat’s ‘brain’ — to control the virtual body in a physics simulator called MuJoCo, where gravity and other forces are present.

Publishing in Nature, the researchers found that activations in the virtual control network accurately predicted neural activity measured from the brains of real rats producing the same behaviors, said Ölveczky, who is an expert at training (real) rats to learn complex behaviors in order to study their neural circuitry. The feat represents a new approach to studying how the brain controls movement, Ölveczky said, by leveraging advances in deep reinforcement learning and AI, as well as 3D movement-tracking in freely behaving animals.”
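The training recipe in the quote, a neural-network "brain" learning to drive a simulated body in a physics engine, can be illustrated at toy scale. The sketch below has no relation to the actual MuJoCo rat model; it just uses simple random-search reinforcement learning to teach a two-weight linear policy to push a 1D point mass toward a target:

```python
import random

random.seed(0)  # deterministic toy run

def rollout(weights):
    """Simulate a 1D point mass for 50 steps. The 'policy' maps
    (position error, velocity) to a force; reward is negative
    distance from the target position (1.0), summed over steps."""
    pos, vel, total_reward = 0.0, 0.0, 0.0
    for _ in range(50):
        force = weights[0] * (1.0 - pos) + weights[1] * (-vel)
        vel += 0.1 * force        # crude physics integration
        pos += 0.1 * vel
        total_reward -= abs(1.0 - pos)
    return total_reward

# Random-search "training": keep weight perturbations that improve reward.
best_w = [0.0, 0.0]
best_r = rollout(best_w)
for _ in range(200):
    candidate = [w + random.gauss(0, 0.5) for w in best_w]
    r = rollout(candidate)
    if r > best_r:
        best_w, best_r = candidate, r

untrained_r = rollout([0.0, 0.0])
print(best_r, untrained_r)
```

The real work used deep reinforcement learning on a biomechanically realistic body, but the shape of the loop is the same: simulate, score the behavior, and adjust the network toward higher reward.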

Due to their similar biology, rats have unfortunately become synonymous with lab work. Perhaps this can be the start of using virtual rodents to conduct testing so that we won’t have to use the real ones anymore.

Is a Virtual Rodent the Greatest Idea Ever?


Google may have a new competitor, and it’s not OpenAI. Instead, it could be Perplexity, which styles itself as an answer engine, not a search engine.

Explains Tech.co:

“Perplexity AI is an AI-powered conversational search engine that produces concise answers to user-generated queries. The technology utilizes LLMs like GPT-4, Claude, Mistral Large, and its own custom models for natural language processing, and searches the web in real-time to provide users with up-to-date answers.

Perplexity AI is quite literally the lovechild of Google Search and ChatGPT. The app was founded in 2022 by former Google and OpenAI employees, who were frustrated by the wasted potential of the LLMs they were developing. The team wanted to make the information these models contain more accessible to the public, helping to ‘democratize access to knowledge’ as a result.

The team chose the name Perplexity because they want the platform to help users gain accurate and informative answers to questions, even if they’re complex or challenging.”

Adds Forbes:

“Instead of entering keywords and sorting through a tangle of links, users pose their questions directly to Perplexity.ai and receive concise, accurate answers backed up by a curated set of sources. Powered by large language models (LLMs), this ‘answer engine’ places users, not advertisers, at its center. This shift promises to transform how we discover, access, and consume knowledge online—and, with that, the structure of the internet as we know it today.”
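The "answer engine" pattern both quotes describe, retrieving live sources and then composing a concise cited answer, is essentially retrieval-augmented generation. Here's a toy sketch; the mini-index, scoring, and answer step are my stand-ins, not Perplexity's actual stack:

```python
# Toy retrieval-augmented "answer engine": retrieve sources, cite them.
# The two-document index below stands in for real-time web search.

SOURCES = [
    {"url": "example.com/jupiter",
     "text": "Jupiter is the largest planet in the Solar System."},
    {"url": "example.com/mars",
     "text": "Mars is known as the Red Planet."},
]

def retrieve(query, k=1):
    # Rank sources by word overlap with the query (a crude relevance score).
    words = set(query.lower().split())
    scored = sorted(SOURCES,
                    key=lambda s: len(words & set(s["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query):
    # A real engine would hand the retrieved passages to an LLM to compose
    # the answer; here we just quote the top source and cite it.
    top = retrieve(query)[0]
    return f"{top['text']} [source: {top['url']}]"

result = answer("what is the largest planet")
print(result)
```

The key design point is in the last line of `answer`: the source citation travels with the text, which is what separates an answer engine from a chatbot confidently making things up.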

You can check out a video of it below:

Personally, I’m all for democratizing knowledge, and if we can improve search by removing the focus on advertisers, then all the better. I just think it’s more likely that conversational chatbots like ChatGPT will ultimately wind up replacing traditional search engines like Google, since they have a first-mover’s advantage and will soon be integrated into iPhones.

However, Perplexity’s new Pages tool could stand on its own, enabling users to create entire Wikipedia-style webpages on any topic from a single prompt. If you’re looking to build out a website or create material for a presentation, this could be a fast and easy way to do it.

Either way, it’s clear that there’s a new player in the knowledge space, one that aims to make use of all the information that’s already online. So, if you get a chance, you may want to try an answer engine instead of a search engine the next time you want to find something out.

Is Perplexity the Greatest Idea Ever?


It’s one of the oldest running jokes in history: the idea that the weatherman is going to get his prediction wrong. Or, nowadays, that the absurdly hot Latina weather woman is going to get her forecast wrong.

Usually it’s no fault of their own. The weather is hard to predict and can change at a moment’s notice. Our predictions are only as good as the data we collect, and when dealing with weather patterns on a global scale there’s a lot of fast-moving data to calculate. But we may soon have an AI that could help us make more accurate predictions, even when it comes to air pollution.

Phys.org explains:

“A team of computer scientists at Microsoft Research AI for Science, working with a colleague from JKU Linz, another from Poly Corporation, and another from the University of Amsterdam, has built what Microsoft describes in its press release, as a ‘cutting-edge foundation model’—a system called Aurora that can be used to make global weather and air pollution level predictions more quickly than traditional systems.

Conventional computer-based weather prediction systems typically run on supercomputers because they rely on mathematical formulas that crunch massive amounts of data. More recently, several groups (such as DeepMind and Nvidia) have taken another approach: using AI-based applications that take far less time to run.

In this new effort, Microsoft, working with its research partners, has developed a weather prediction system that it claims rivals traditional systems but takes only minutes to run—and it can predict global air pollution levels as well.

Called Aurora, the system uses 1.3 billion parameters and was trained using millions of hours of data from six climate and weather models. It can make 10-day predictions for any part of the world. It can also be used to predict the size and severity of unique weather events, such as hurricanes.

Microsoft describes it as a system made with ‘flexible 3D Swin Transformers, with Perceiver-based encoders and decoders.’ The technology allows it to use a wide variety of atmospheric data, such as wind speed, air pressure, temperature and even greenhouse gas concentrations. And that, the researchers claim, allows the system to discover patterns that would not be seen otherwise—patterns that can lead to predictable outcomes.

Unique among its capabilities is Aurora’s ability to predict air pollution levels for any given urban area around the world—and to be able to do it so quickly that it can serve as an early warning system for places that are about to experience levels of dangerous pollutants.”
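Models like Aurora typically forecast by learning a single short-range step and then rolling it out autoregressively: each prediction is fed back in as the input for the next one, out to the full 10 days. The loop below is just an illustration of that rollout structure; the "model" is a trivial damping function of my own, not anything from Aurora:

```python
def step(state):
    """Stand-in for one learned 6-hour forecast step.
    Here: each temperature relaxes 10% toward a 15-degree baseline."""
    return [15.0 + 0.9 * (t - 15.0) for t in state]

def forecast(initial_state, hours):
    """Autoregressive rollout: feed each prediction back in as input."""
    state = initial_state
    trajectory = [state]
    for _ in range(hours // 6):   # one learned step per 6 hours
        state = step(state)
        trajectory.append(state)
    return trajectory

# A 10-day forecast is 40 six-hour steps, here on a toy 3-cell grid.
traj = forecast([25.0, 5.0, 15.0], hours=240)
print(len(traj), traj[-1])
```

This also hints at why speed matters so much: the whole 10-day rollout is just 40 cheap forward passes, versus hours of supercomputer time for equation-based solvers.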

Personally, I love the name Aurora. Referencing the natural phenomenon, it’s perfect for a weather-based system. I just hope it actually works. More accurate weather forecasts aren’t just a nice-to-have. With the climate changing and more hurricanes, tornadoes, flash floods, forest fires, atmospheric rivers, polar vortexes, and other fast-moving extreme weather events popping up, we need more accurate information, both to warn people when they’re at risk and to tell them whether the air outside is even breathable, not just whether they need to bring an umbrella or wear a hat.

With Aurora being 5,000 times faster than a supercomputer at making predictions, we might finally have it.

Is Aurora the Greatest Idea Ever?


People are inherently short-sighted and selfish. We know we should exercise more and eat well, but we also want that piece of chocolate cake right now. Our future self is just as much a stranger as anyone else, and so we put their thoughts and feelings out of our minds and continue to indulge. It’s why it’s so hard for people to make smart decisions and enact climate change policies that could benefit future generations and our future selves. We’re only concerned with our current selves. But talking to a future version of ourselves could help change that, motivating us to act better now.

Singularity Hub sums it up best:

“Chatbots are now posing as friends, romantic partners, and departed loved ones. Now, we can add another to the list: Your future self.

MIT Media Lab’s Future You project invited young people, aged 18 to 30, to have a chat with AI simulations of themselves at 60. The sims—which were powered by a personalized chatbot and included an AI-generated image of their older selves—answered questions about their experience, shared memories, and offered lessons learned over the decades.

In a preprint paper, the researchers said participants found the experience emotionally rewarding. It helped them feel more connected to their future selves, think more positively about the future, and increased motivation to work toward future objectives.

‘The goal is to promote long-term thinking and behavior change,’ MIT Media Lab’s Pat Pataranutaporn told The Guardian. ‘This could motivate people to make wiser choices in the present that optimize for their long-term wellbeing and life outcomes.’

Chatbots are increasingly gaining a foothold in therapy as a way to reach underserved populations, the researchers wrote in the paper. But they’ve typically been rule-based and specific—that is, hard-coded to help with autism or depression.

Here, the team decided to test generative AI in an area called future-self continuity—or the connection we feel with our future selves. Building and interacting with a concrete image of ourselves a few decades hence has been shown to reduce anxiety and encourage positive behaviors that take our future selves into account, like saving money or studying harder.

Existing exercises to strengthen this connection include letter exchanges with a future self or interacting with a digitally aged avatar in VR. Both have yielded positive results, but the former depends on a person being willing to put in the energy to imagine and enliven their future self, while the latter requires access to a VR headset, which most people don’t have.

This inspired the MIT team to make a more accessible, web-based approach by mashing together the latest in chatbots and AI-generated images.”
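Mechanically, the "Future You" chatbot is a persona prompt: the user's own details get folded into a system message instructing the model to answer as their 60-year-old self, alongside the AI-aged photo. A hypothetical sketch of that prompt assembly (the field names and wording are my invention, not MIT's):

```python
def build_future_self_prompt(profile):
    """Fold a user's details into a system prompt for a 'future self'
    persona. A real system would send this as the system message to a
    chat model along with the user's questions."""
    return (
        f"You are {profile['name']} at age 60, looking back on your life. "
        f"You are currently {profile['age']} and your goal is: "
        f"{profile['goal']}. Answer in the first person, share memories "
        "of how things worked out, and offer lessons learned over the "
        "decades."
    )

prompt = build_future_self_prompt(
    {"name": "Alex", "age": 25, "goal": "become a science teacher"}
)
print(prompt)
```

The personalization is the whole trick: the same base model feels like "your" future self only because your name, age, and goals are baked into the instructions.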

Apologies to all the AI girlfriends out there, but the future me might be my new favorite chatbot. And it’s not just because it’s about me. There’s definitely real value in talking to a future version of yourself to help you make better financial decisions, eat healthier, and take better care of yourself, bridging the temporal gap between present-day choices and their long-term consequences.

Is the Future You chatbot the Greatest Idea Ever?


It’s been described as the Netflix of AI and it could soon let you create your very own shows. Or at least new animated episodes of existing ones.

As the Hollywood Reporter puts it:

“Generative artificial intelligence is coming for streaming, with the release of a platform dedicated to AI content that allows users to create episodes with a prompt of just a couple of words.

Fable Studio, an Emmy-winning San Francisco startup, on Thursday announced Showrunner, a platform the company says can write, voice and animate episodes of shows it carries. Under the initial release, users will be able to watch AI-generated series and create their own content — complete with the ability to control dialogue, characters and shot types, among other controls.”

Adds Indie Wire:

“A new app wants to become the ‘Netflix of AI,’ allowing users to watch one of its AI-generated animated series on-demand. Not yet moved? The Showrunner app then allows the viewers to use AI to create their own episodes of the show.

AI production company Fable Studios has today launched the app in its Alpha stage to the public. With it, they’re releasing the first two episodes of an animated series created with Showrunner’s AI tools, a tech-industry satire called ‘Exit Valley’ that looks quite similar to the animation style seen on ‘South Park‘ or ‘Rick and Morty.’

The first episode, which you can watch here, imagines Gold Rush-era ancestors of Mark Zuckerberg, Eduardo Saverin, and the Winklevoss twins fighting to the death over the valuable element in a satirical take on the cutthroat battle over Facebook. Even their ‘South Park’ is…not quite “South Park.” And we’re not just talking about the animation.

The studio is targeting 22 episodes for a first season. Viewers can input prompts to generate their own episode of the series, selecting the characters, storylines, and shot types. The best user-prompted episodes could one day even become part of the series’ official canon.

‘The next Netflix won’t be purely passive; you will be at home, describe the show you’d like to watch and within a minute or two start watching,’ Fable Studios CEO Edward Saatchi said in a statement. ‘Finish a show that you enjoy and make new episodes, and even put yourself and your friends in episodes — fighting aliens, in your favorite sitcom, and solving crimes.'”

I haven’t gotten involved with generative AI tools yet, despite my obvious interest in AI. For now I’m content to write about everything happening rather than dabble with the tools myself. What I’ve been waiting for is something like Showrunner: something to enable me to bring all of my TV show and movie ideas to life rather than just generate creative images, my interest lying more in filmmaking and entertainment than pure art.

If Showrunner AI is focusing on animated series to start, then it seems I’ll have to wait a little bit longer before this same concept can be applied to live-action content. But if that happens, then we might truly have a Netflix of AI moment: a future in which we create the content we consume instead of just passively digesting what Hollywood throws at us, letting our imaginations run wild and living out our wildest fantasies.

Is Showrunner AI the Greatest Idea Ever?


When you think of the name Steve, who do you think of? Steve Jobs? Steve Carell? Stone Cold Steve Austin? Sandwich-eating Steve from Full House?

Well, in the future you could be thinking of an AI politician in the UK.

According to Wired, “AI Steve, an avatar of real-life Steven Endacott, a Brighton-based businessman, is running for Parliament as an Independent.”

Now, this isn’t the first time an AI entity has tried to run for office. It’s happened before in a mayoral race in Japan with Michihito Matsuda, and even with an entire Danish political party. And it’s an idea I’ve had since 2015. But has the time finally come for an AI politician to win an election? AI Steve will look to become the first.

Wired explains:

“AI Steve was designed by Neural Voice, an AI voice company of which Endacott is the chair. According to Jeremy Smith, the company’s cofounder, AI Steve can have up to 10,000 conversations at once. ‘A key element is creating your own database of information,’ says Smith. ‘And how to inject customer data into it.’

The idea for AI Steve came from Endacott’s own frustration with trying to enter politics in order to advocate for issues he cared about. ‘I’m very concerned about the environment. We need a lot of change in government to actually help control climate change,’ he says. ‘The only way to do that is to stop talking to the outside and get inside the tent and start actually changing policy.’ When Endacott attempted to stand for office in years past, he said he felt like it was all about party jockeying and worrying about which seats or districts were “safe” rather than responding to the needs of real people.

AI Steve, he claims, will be different. AI Steve will transcribe and analyze conversations it has with voters and put issues of policy forward to ‘validators,’ or regular people who can indicate whether they care about an issue or want to see a certain policy enacted.”
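The validator mechanism described above, surfacing each policy issue to ordinary constituents and acting on the ones they back, boils down to a simple tallying scheme. A hypothetical sketch (the threshold, data shapes, and sample issues are my invention, not Neural Voice's design):

```python
from collections import Counter

def tally_policies(validator_votes, threshold=0.5):
    """Given votes of the form (policy, supports: bool), return the
    policies backed by more than `threshold` of the validators who
    weighed in on them."""
    support, total = Counter(), Counter()
    for policy, supports in validator_votes:
        total[policy] += 1
        if supports:
            support[policy] += 1
    return [p for p in total if support[p] / total[p] > threshold]

# Example: three validators on one issue, two on another.
votes = [
    ("expand coastal flood defences", True),
    ("expand coastal flood defences", True),
    ("expand coastal flood defences", False),
    ("cut local bus routes", False),
    ("cut local bus routes", False),
]
backed = tally_policies(votes)
print(backed)
```

The interesting design question isn't the arithmetic, it's upstream: the AI's job is turning 10,000 unstructured conversations into the discrete policy items that get tallied here.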

Personally, I would be willing to hand the keys to the city over to an AI to make our decisions for us. Clearly, we’re not doing much better on our own, and our current political system is polarizing and divisive. At the very least we could be using AI to parse vast quantities of data about our electorates and find out what people really want, like AI Steve plans to do. Being able to carry on thousands of conversations with concerned citizens at the same time could be very useful as well.

Now, I get that some people won’t want to vote for an AI. After all, AI has its own trust issues. Just ask Elon Musk. Wait a second. Come to think of it, that actually makes AI perfectly suited for politics. It’ll fit right in.

Is AI Steve the Greatest Idea Ever?

