
Archive for May, 2024

If you believe in a good conspiracy theory, here’s one for you: the Internet is dead. Its original, organically created content has been purposely replaced by AI-generated content by a malevolent government hell-bent on world domination in order to exert control over us.

Fast Company explains:

“There’s been a popular theory floating around conspiracy circles for about seven or eight years now. It’s called the ‘Dead Internet’ theory, and its main argument is that the organic, human-created content that powered the early web in the 1990s and 2000s has been usurped by artificially created content, which now dominates what people see online. Hence, the internet is ‘dead’ because the content most of us consume is no longer created by living beings (humans).

But there’s another component to the theory—and this is where the conspiracy part comes into play. The Dead Internet theory states that this move from human-created content to artificially generated content was purposeful, spearheaded by governments and corporations in order to exploit control over the public’s perception. 

Now, as a novelist, I love this theory. What a great setup for a tense techno-thriller! But as a journalist, I always thought it seemed pretty bonkers. That is, until recently. Lately, the Dead Internet theory is starting to look less conspiracy and more prophetic—well, at least in part.

Let me address the conspiracy part of the theory first. While all nation-states and corporations try to control narratives to some degree, it’s unlikely that any one or even group of them got together and said, ‘Hey, let’s get rid of all the human-generated content online and replace it with artificially created content.’ It would be too arduous a task and would require tens of thousands—maybe even hundreds of thousands—of people to keep their mouths shut so the public never finds out.

But the first part of the theory—that the internet’s human-created content is being replaced with artificially generated content—not only seems possible, it’s starting to feel plausible. This idea got its start sometime in the 2010s as bots became more and more prevalent on social media platforms. But an old-school bot never had the technological ability to generate completely fabricated images, videos, websites, and news articles. AI does.

Ever since ChatGPT burst onto the scene in late 2022, people have been using it to generate content for websites, social media posts, and articles of all kinds. People have also been using AI image-generation tools to create an unending flood of photos, videos, and artwork, which now abound on mainstream social media platforms like Instagram, Facebook, Twitter, YouTube, and TikTok.

In recent months, this flood of AI-generated content has gotten especially bad on TikTok. Sometimes every second or third video I see in the app is AI-generated at every level: from the script to the narration to the accompanying images. And man, don’t get me started on Facebook…”

In the past I wrote about an idea I had wherein AI would generate likes and even comments on your posts to give you at least a little bit of engagement. That way you wouldn’t be discouraged when you produce content (like this very article) that no one is ever going to read. At least if you had some feedback you could get the ball rolling and then maybe the real comments would come in.
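
To make the idea concrete, here’s a toy sketch of what such an engagement-seeding bot might look like. Everything in it (the function names, the comment styles, the stubbed-out model call) is invented for illustration, not a real product or API:

```python
import random

# Hypothetical sketch of the engagement-seeding idea above: an AI drafts a
# few plausible comments and likes for a brand-new post so its author
# isn't greeted by total silence. All names here are invented.

COMMENT_STYLES = ["enthusiastic", "thoughtful question", "personal anecdote"]

def draft_comment(post_text: str, style: str) -> str:
    """Stand-in for a text-generation model call."""
    # A real system would prompt an LLM with something like:
    # f"Write a short {style} comment replying to: {post_text}"
    return f"[{style} AI comment on: {post_text[:40]}...]"

def seed_engagement(post_text: str, n_comments: int = 2) -> dict:
    """Give a fresh post a little starter engagement to get the ball rolling."""
    comments = [draft_comment(post_text, random.choice(COMMENT_STYLES))
                for _ in range(n_comments)]
    return {"likes": random.randint(1, 5), "comments": comments}

print(seed_engagement("The Internet is dead. Here's why..."))
```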

But if the content itself is also AI-generated, then that could be a problem: people won’t be able to tell what’s real and what’s not, and advertisers won’t be able to tell the difference either. That leads to a world where we consume content created by AI instead of real people and talk to AI chatbot versions of real people instead of the people themselves. AI run amok, coating the Internet in AI slime that ruins everything.

Part of what makes the Internet great, though, is that it’s a random collection of everything that humanity has to offer. There are millions of websites and blogs out there, with unique niches for every possible interest you may have. Rule 34 may apply to porn, but it’s also true of the Internet itself. If you can think of it, there’s porn of it. Or in this case, if you can think of it, there’s probably a website or blog dedicated to it. If everything is AI-generated, done purposefully or not, we’d lose that unique charm that makes the Internet the Internet.

So, for now let’s hope that the Dead Internet Theory remains just that. A theory.

(Video: “The Dead Internet Theory” by Ralph Smart, YouTube)

Is the Dead Internet Theory true?!

Read Full Post »

I’m currently in the market for a new phone, but maybe I should wait and get the next generation of phones: one that is literally generative and specifically optimized for AI.

Fast Company breaks it down:

“In December, Counterpoint Technology Market Research issued a report on GenAI smartphones that described one of their main characteristics as being ‘a subset of AI smartphones that uses generative AI to create original content, rather than just providing preprogrammed responses or performing predefined tasks.’ (In April, it expanded on that definition.)

And in February, Gartner offered its own definition, which says one of the key differentiators of a GenAI phone versus a regular smartphone is that the GenAI phone is ‘capable of locally running a base or fine-tuned AI model that generates new derived versions of content, strategies, designs, and methods.’

Counterpoint’s and Gartner’s definitions differ a little (and are very wordy), but it’s safe to say that a GenAI phone can be considered a smartphone that has at least the four following characteristics:

  • Offers generative AI apps and tools, such as AI chatbots and AI image editing and generation apps.
  • These tools should be baked into the phone’s operating system wherever possible so they can be used seamlessly system-wide.
  • The phones should have CPUs—computer chips—designed specifically for handling complex AI tasks.
  • The phones should be powerful enough to run AI models natively on the device instead of needing to send data to the cloud for AI servers to process remotely.

If we use the four points above to define what a GenAI phone is, it becomes evident that as of May 2024, few smartphones can be considered true GenAI phones. That’s because most of the smartphones available today don’t have chips designed specifically to handle complex AI tasks.

And while many smartphones today can run, for example, the ChatGPT app, that doesn’t make them GenAI phones since when you use the ChatGPT app on your smartphone, your queries aren’t being processed locally on the device itself. Instead, whatever you type into the ChatGPT app is being sent off to OpenAI’s servers to be processed remotely. Likewise, just because you’ve downloaded a generative AI photo app doesn’t mean you have a GenAI phone since remote servers usually do the image generation.”

In the past, new phones were derivative, changing only slightly from version to version. Now they’ll be generative, capable of creating new content thanks to their AI-first focus.
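
To illustrate the distinction the definitions above draw, here’s a minimal sketch of the routing decision, assuming a hypothetical phone that can sometimes run a model on its own chip. Both inference functions are stand-ins, not real APIs:

```python
# A rough sketch of the distinction drawn above: a GenAI phone answers a
# query with a model running on its own chip, while a regular phone sends
# the query off to a remote server. Both functions below are stand-ins.

def run_on_device(query: str) -> str:
    """Pretend on-device inference (would use the phone's AI chip and a local model)."""
    return f"local answer to: {query}"

def call_cloud_api(query: str) -> str:
    """Pretend remote inference (would send the query to a provider's servers)."""
    return f"cloud answer to: {query}"

def answer(query: str, has_ai_chip: bool, model_fits_on_device: bool) -> str:
    # Only a phone that can hold and run the model locally qualifies as a
    # "GenAI phone" under the definitions above; otherwise data leaves the device.
    if has_ai_chip and model_fits_on_device:
        return run_on_device(query)
    return call_cloud_api(query)

print(answer("Summarize my day", has_ai_chip=True, model_fits_on_device=True))
print(answer("Summarize my day", has_ai_chip=False, model_fits_on_device=False))
```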

Is a GenAI phone the Greatest Idea Ever?

Read Full Post »

If you were going to break down the best innovations currently happening from A to Z, you would probably do something like this: A for AI, B for Blockchain, C for CRISPR, D for Driverless Cars, etc. But soon we might start seeing some crossover between these ideas, with AI and CRISPR joining forces. A true technological convergence.

Futurism explains:

“A team of researchers at a Berkeley-based startup called Profluent say they’ve used generative AI technologies to edit human DNA.

As the New York Times reports, the startup fed huge amounts of biological data into a large language model (LLM) to come up with new editors based on the groundbreaking gene-editing technique CRISPR, as detailed in a yet-to-be-peer-reviewed paper.

Their goal is to produce gene editors that are more efficient and capable than existing biological mechanisms that allow organisms to, for instance, ward off diseases and other pathogens.

Profluent also claims to have already used one of these AI-generated gene editors, dubbed OpenCRISPR-1, to edit human DNA. The company says it’s the ‘world’s first open-source, AI-generated gene editor’ that was ‘designed from scratch with AI.’

‘Our success points to a future where AI precisely designs what is needed to create a range of bespoke cures for disease,’ said cofounder and CEO Ali Madani in a press release. ‘To spur innovation and democratization in gene editing, with the goal of pulling this future forward, we are open-sourcing the products of this initiative.’

While the company is open-sourcing its AI model, it’s keeping the AI tech itself a secret. For now, it’s more of a proof-of-concept and it’s still unclear if OpenCRISPR-1 will be able to match or outdo existing CRISPR models, as the NYT points out.

Besides, as experts told the newspaper, what’s really holding back the field of gene editing is the number of preclinical studies to show if these edits are safe and effective. Despite plenty of optimism, scientists are still concerned about the possible side effects of editing human DNA, including the potential of triggering — rather than fighting — cancer.

Unperturbed, Profluent VP of gene editing Peter Cameron is calling the company’s AI gene editor a ‘watershed moment and the beginning of what we hope will be an iterative process as we embark on this next generation of building genetic medicines.'”

This is likely only the start of using AI in healthcare. From discovering new drugs to editing our DNA, AI will play a key role in making us smarter and helping us to see connections that would have otherwise remained invisible to us.

Is AI gene editing the Greatest Idea Ever?

Read Full Post »

Our introduction to AI has so far mostly involved chatbots. But soon we could have AI agents that cross over into the real world and actually carry out tasks for us. And it could be a very big deal.

According to Vox:

“…you’d have what is called an ‘AI agent,’ or an AI that acts with independent agency to pursue its goals in the world.

AI agents have been called the ‘future of artificial intelligence’ that will ‘reinvent the way we live and work,’ the ‘next frontier of AI.’ OpenAI is reportedly working on developing such agents, as are many different well-funded startups.

They may sound even more sci-fi than everything else you’ve already heard about AI, but AI agents are not nonsense, and if effective, could fundamentally change how we work.”

At its I/O conference earlier this week, Google mentioned its future plans for AI agents. Mashable explains:

“According to Pichai, AI agents are still in ‘early days,’ but their description shows what Google envisions AI can do for users.

Pichai described AI agents as ‘intelligent systems that show reasoning, planning, and memory’ and can ‘think multiple steps ahead’ to complete more complex tasks for users.

Shopping returns is a specific example used at Google I/O to give a real-world use case for AI agents. Pichai explained a scenario where a user wants to return a pair of shoes they purchased. AI agents will be able to search the user’s email inbox for the receipt, locate the order number from the email, fill out the return form on the store’s website, and schedule a pickup for the item to be returned.

Another scenario provided involves AI agents searching up local shops and services, like dog walkers and dry cleaners, for a user that just moved to a new city, so that the user has all of these locations and contacts at their disposal. A key feature mentioned here was that Gemini and Chrome would work together to complete these tasks, showing how AI agents would be able to work across various software and platforms.”

So, while not actually crossing over into the real world, these AI agents would be able to act independently, with agency, and carry out tasks for us across browsers and apps, giving us actual personal assistants, not just chatbots to talk to.
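
As a rough illustration of the shoe-return scenario described above, here’s a sketch of the kind of multi-step pipeline an agent would execute. Every function is a hypothetical stand-in for a capability (inbox search, form filling, scheduling), not an actual Gemini API:

```python
# Conceptual sketch of the multi-step shoe-return demo. Every function is a
# hypothetical stand-in for an agent capability, not a real API.

def search_inbox(query: str) -> dict:
    """Stand-in: find the receipt email and pull out the order number."""
    return {"subject": "Your shoe order", "order_number": "ABC-123"}

def fill_return_form(store: str, order_number: str) -> str:
    """Stand-in: fill out the return form on the store's website."""
    return f"{store}-return-{order_number}"

def schedule_pickup(confirmation: str, when: str) -> str:
    """Stand-in: book a carrier pickup for the package."""
    return f"pickup booked for {when} ({confirmation})"

def return_shoes(store: str) -> str:
    """Chain the steps the way an agent would: plan, then act, step by step."""
    receipt = search_inbox(f"receipt from {store}")                  # step 1
    confirmation = fill_return_form(store, receipt["order_number"])  # step 2
    return schedule_pickup(confirmation, "tomorrow 10:00")           # step 3

print(return_shoes("ExampleShoeStore"))
```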

Are AI Agents the Greatest Idea Ever?

Read Full Post »

In the future, headphones and AirPods may do more than just play music and podcasts or help us drown out unwanted noises. They might also become a jumping-off point for personalized AI assistants that use attached cameras to see the world while we hear it, thanks to Meta’s plans for Camerabuds.

As Tech Radar puts it:

“Meta seemingly won’t rest until your whole head is covered in its tech. It’s given us VR headsets like the Meta Quest 3 and smart specs like the Ray-Ban Meta smart glasses, teased Meta AR glasses, and now a leak suggests it’s looking into headphones (or earbuds) with AI features and cameras.”

Hypebeast adds that:

“Meta is reportedly developing an artificial intelligence-enabled set of earphones that have two outward-facing cameras on the sides that will help to detect the wearer’s surroundings and provide real-time AI assistance. The working name for the project is ‘Camerabuds,’ a play on earbuds and cameras, and it will be a part of the company’s list of wearable developments.

At this time, it is unclear how long the project has been in development at Meta and whether it will be moving forward. Neither Zuckerberg nor Meta has responded just yet, but he has reportedly ‘seen several possible designs for the device’ and has not been ‘satisfied’ with the progress yet. Engineering concerns have been flagged internally, including battery life and heat, as well as expected privacy issues with the camera-enabled piece. Furthermore, obstruction of the camera by long hair could potentially affect the design.”

So, we’re still in the early stages of the concept and shouldn’t get too excited yet. Dealing with the hair issue is also a major potential roadblock, and personally I feel like putting a camera on earbuds wouldn’t make much sense when you can just put one on glasses and see what the user is actually seeing. For my money, Meta’s Ray-Ban glasses are more likely to be a commercial success than Camerabuds would be, but it’s interesting to see Meta try to infuse AI into all of its potential products.

Hopefully, Mark Zuckerberg listens to the feedback he’s getting about Camerabuds and takes it into consideration when deciding whether or not to move forward with this project.

Are Camerabuds the Greatest Idea Ever?

Read Full Post »

It seems like every few months new social networks pop up and then disappear. Remember Artifact? Or Gas? Or Mastodon?

Well, now comes Maven, an AI-driven anti-social network that gets rid of likes altogether in favor of organizing information based on your interests. Will it have staying power? Maybe. If you believe in the power of serendipity like Kate Beckinsale.

Wired explains:

“The platform eschews likes and follows in favor of letting pure chance play more of a role in what appears in users’ feeds. Maven’s lead investor is Twitter cofounder and former CEO Ev Williams, who also founded Medium. Other backers include OpenAI CEO Sam Altman.

Maven is built around a concept called open-endedness, pioneered by computer scientist and AI researcher Kenneth Stanley, one of the startup’s three cofounders. In most areas of computing, programmers write code or train an AI model to achieve specific objectives, such as driving a car without crashing or generating humanlike text. Stanley creates systems that instead evolve, seeking novelty for its own sake. These systems sometimes discover stepping stones toward solutions that couldn’t be achieved via a direct path of optimization. While working in Uber’s AI lab, he and collaborators used this approach to develop neural networks that could play Atari games and control a virtual robot better than previous systems.

In 2015, Stanley and a collaborator, Joel Lehman, published a book called Why Greatness Cannot Be Planned that applies the philosophy to life outside the lab, encouraging people to seek serendipity in their everyday lives. It gained a devoted following, and for years readers have been telling Stanley how optimizing for objectives like grades or salaries or grants, which can disincentivize exploration, has marred their lives.

Maven sprang from Stanley’s concluding that the best way to create more serendipity in people’s lives was through a social network—and then by chance crossing paths with Williams. In the middle of 2022, he left OpenAI to get started. Stanley teamed up with Jimmy Secretan, a former grad student who had worked on open-endedness in Stanley’s lab at the University of Central Florida, and Blas Moros, a like-minded entrepreneur. They founded Maven together, with Stanley as CEO, Secretan as CTO, and Moros as COO, and formally announced it on Twitter in January. It’s available for Apple and Android devices, and also via the web.

Stanley argues that most social networks suffer from the weight of objectives, because of the way they incentivize likes, follows, and attention. It turns people into brands and creates flame wars. On Maven, you don’t have followers, so you don’t have to worry about what your followers want to hear from you, or how to gain more of them. If you have a question about, say, washing machines, Stanley says, you can just post it, no stress, and let the platform find an appropriate audience.

On Maven, users follow interests such as computers or consciousness, each of which has its own profile on the service. When someone posts something, algorithms automatically analyze the text and tag it with relevant interests so it shows up on those pages, which also show other users who follow that interest and a list of related interests.

The main feed in the Maven app shows posts from all the interests a person follows. The platform doesn’t simply show the most popular. Posts need to clear a certain bar for engagement—an example of what Stanley calls a ‘minimal-criterion mechanism’ that he says also explains part of biological evolution and increases diversity—and then their probability of appearing is based on how closely they match a user’s interests. The app also has a serendipity slider from ‘Only show my selected interests’ to ‘Show me everything.'”
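
Based solely on Wired’s description above, here’s a toy sketch of how that feed selection might work. The engagement threshold, scoring, and blending are invented for illustration, not Maven’s actual code:

```python
import random

# A toy reading of the feed mechanism Wired describes: posts must clear a
# minimal engagement bar, then appear with probability weighted by how well
# they match the user's interests, blended with a "serendipity" slider.

MIN_ENGAGEMENT = 2  # hypothetical "minimal criterion"

def interest_match(post_tags: set, user_interests: set) -> float:
    """Fraction of a post's tags the user already follows (0..1)."""
    return len(post_tags & user_interests) / len(post_tags) if post_tags else 0.0

def feed(posts: list, user_interests: set, serendipity: float) -> list:
    """serendipity=0 -> only selected interests; serendipity=1 -> show everything."""
    eligible = [p for p in posts if p["engagement"] >= MIN_ENGAGEMENT]
    selected = []
    for p in eligible:
        score = interest_match(p["tags"], user_interests)
        # Blend the match score toward 1.0 as the serendipity slider goes up.
        prob = (1 - serendipity) * score + serendipity
        if random.random() < prob:
            selected.append(p)
    return selected

posts = [{"tags": {"consciousness"}, "engagement": 3},
         {"tags": {"washing machines"}, "engagement": 5}]
print(feed(posts, user_interests={"consciousness"}, serendipity=0.3))
```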

The problem with starting a social network is one of scale. The more people use your service, the more engaging it becomes and the more other people want to use it. Eventually, you reach a critical mass and you’re off to the races. But it’ll be hard for Maven to get off the ground if everyone is still on Facebook, X, and LinkedIn, same as it was for Mastodon and Artifact and Gas.

But there is promise behind the premise of Maven: helping you find relevant information and interesting articles about topics you’re interested in, powered by like-minded individuals, without the need to change who you are in order to build a brand, cultivate an audience, or cater to an algorithm. A purer form of social media. Driven by serendipity, not status. Interests, not influencers.

You know what? I kind of like it.

Is Maven the Greatest Idea Ever?

Read Full Post »

I previously wrote about the future of search back in 2022, when first writing about TikTok, noting how traditional search could be replaced by an AI that tracks what you do online and then serves up targeted results based on your clicks and movements. And now we could have another new version of search: one that spruces up traditional search with AI instead of replacing it with something different.

The Verge sums it up best:

“A year ago, Google said that it believed AI was the future of search. That future is apparently here: Google is starting to roll out ‘AI Overviews,’ previously known as the Search Generative Experience, or SGE, to users in the US and soon around the world. Pretty soon, billions of Google users will see an AI-generated summary at the top of many of their search results. And that’s only the beginning of how AI is changing search.

‘What we see with generative AI is that Google can do more of the searching for you,’ says Liz Reid, Google’s newly installed head of Search, who has been working on all parts of AI search for the last few years. ‘It can take a bunch of the hard work out of searching, so you can focus on the parts you want to do to get things done, or on the parts of exploring that you find exciting.’

Reid ticks off a list of features aimed at making that happen, all of which Google announced publicly on Tuesday at its I/O developer conference. There are the AI Overviews, of course, which are meant to give you a general sense of the answer to your query along with links to resources for more information. There’s also a new feature in Lens that lets you search by capturing a video. There’s a new planning tool designed to automatically generate a trip itinerary or a meal plan based on a single query. There’s a new AI-powered way to organize the results page itself so that when you want to see restaurants in a new city, it might offer you a bunch for date night and a bunch for a business meeting without you even having to ask. 

This is nothing short of a full-stack AI-ification of search. Google is using its Gemini AI to figure out what you’re asking about, whether you’re typing, speaking, taking a picture, or shooting a video.

It’s using a new specialized Gemini model to summarize the web and show you an answer. It’s even using Gemini to design and populate the results page. 

Not every search needs this much AI, though, Reid says, and not every search will get it. ‘If you just want to navigate to a URL, you search for Walmart and you want to get to walmart.com. It’s not really beneficial to add AI.’ Where she figures Gemini can be most helpful is in more complex situations, the sort of things you’d either need to do a bunch of searches for or never even go to Google for in the first place.”

Google’s developer conference made headlines for some of its weird and wacky moments, becoming the real-life version of Hooli from Silicon Valley when it unleashed the Kraken in the form of an unhinged DJ (not Erlich).

But the work they did to improve search should be taken seriously. We’ll just have to wait and see if people like using it and if it’s enough to hold off OpenAI, TikTok, and any other existential threat to search that arises.

Has the future of search arrived?

Read Full Post »

Google Glass, an invention that may have been ahead of its time, was shunned over privacy concerns when it first came out. But could one of my favorite ideas of all time be on the verge of making a comeback?!

Well, it seems that Google’s latest AI advancements, in the form of Project Astra, part of its push to create AI agents, could be paving the way for eventual integration into consumer products, perhaps even AR glasses like Google Glass.

Ars Technica sums it up best:

“Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what’s taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.

Hassabis called Astra ‘a universal agent helpful in everyday life.’ During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.

Google says that Astra uses the camera and microphone on a user’s device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera’s frame.

While Project Astra remains an early-stage feature with no specific launch plans, Google has hinted that some of these capabilities may be integrated into products like the Gemini app later this year (in a feature called ‘Gemini Live’), marking a significant step forward in the development of helpful AI assistants. It’s a stab at creating an agent with ‘agency’ that can ‘think ahead, reason and plan on your behalf,’ in the words of Google CEO Sundar Pichai.”
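
Going by Ars Technica’s description alone, the timeline-and-cache pattern might look something like the sketch below, with a plain-text description standing in for whatever encoded representation Astra actually stores:

```python
from collections import deque
from dataclasses import dataclass

# A bare-bones sketch of the pattern described above: continuously encode
# incoming frames and speech into a timeline, cache the events, and recall
# things that have since left the camera's view. The "encoding" here is
# just a text description; a real system would store model embeddings.

@dataclass
class Event:
    timestamp: float
    description: str  # stand-in for an encoded video/audio frame

class Timeline:
    def __init__(self, max_events: int = 1000):
        self.events = deque(maxlen=max_events)  # rolling cache of recent events

    def observe(self, timestamp: float, description: str) -> None:
        self.events.append(Event(timestamp, description))

    def recall(self, query: str) -> list:
        """Naive keyword recall over cached events."""
        return [e for e in self.events if query.lower() in e.description.lower()]

tl = Timeline()
tl.observe(0.0, "glasses on the desk next to a red apple")
tl.observe(5.0, "user walks into the kitchen")
print(tl.recall("glasses"))  # finds the item even though it's off-camera now
```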

But does this mean that Google Glass is actually coming back?! As one report on Google co-founder Sergey Brin puts it:

“While the rise and fall of Google Glass is a story in and of itself, the vision of the product has made a dramatic comeback in recent years thanks to AI tech and the pursuit of finding the best hardware to suit it — whether that be glasses, lapel pins, or orange squares. Brin’s preference? Something that’s hands-free, wearable, and not a phone.”

Glasses perhaps?!

Personally, I believe that AR glasses are our future, not phones or other wearables like pins. Whether that’s Google Glass or Meta’s Ray-Bans or the Apple Vision Pro remains to be seen. But it’s clear that we’re going to want something hands-free, that’s part of who we are, that can see what we see.

Is Google Glass poised for a comeback thanks to Project Astra?

Read Full Post »

#3,149 – AI Week: Veo

A day after OpenAI made headlines by bringing the plot of the movie Her to life, it was Google’s turn to share its latest breakthroughs and demo its newest products. Chief among them is Veo, a new way to create realistic minute-long videos from prompts, just like OpenAI’s Sora.

The Verge explains:

“It’s been three months since OpenAI demoed its captivating text-to-video AI, Sora, and now Google is trying to steal some of that spotlight. Announced during its I/O developer conference on Tuesday, Google says Veo — its latest generative AI video model — can generate ‘high-quality’ 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles.

Veo has ‘an advanced understanding of natural language,’ according to Google’s press release, enabling the model to understand cinematic terms like ‘timelapse’ or ‘aerial shots of a landscape.’ Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are ‘more consistent and coherent,’ depicting more realistic movement for people, animals, and objects throughout shots.

Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.”

(Image: an AI-generated turtle swimming past a coral reef.)

Even though generative art came first, I believe that generative video will ultimately be the more impactful tool, democratizing filmmaking and making everyone into an amateur auteur. I have yet to experience these tools firsthand (Veo is not yet generally available), but I can’t wait to eventually have a crack at them, unleashing my boundless creativity in waves and waves. If we eventually get to the point where the technology can support feature-length films instead of just one-minute clips, then watch out. There’s no telling what we might be capable of.

Is Veo the Greatest Idea Ever?

Read Full Post »

Yesterday the hype was all about Her. But it really should be about them: our children. For the future of education may have just arrived thanks to GPT-4o. Thanks to its ability to see your screen, understand audio, video, and text, and converse naturally in real time, it can now become your teacher, or at least your tutor, completely revolutionizing education in the process.

This really could wind up democratizing knowledge. Now it won’t matter if you go to a fancy private school or not. Or if your parents can afford a tutor or not. If you are having trouble understanding something, you now have someone who can walk through the problem with you and show you how to do it. Show you where you’re going wrong. What you might have missed.

It’s a better version of watching a YouTube video to learn something. It’s a video you can talk to and interact with. That can understand you and what you’re looking at and point you in the right direction or help you deduce the right answer.

People are obviously going to worry that this will be bad for education, that it will just lead to students taking the easy way out, using AI to cheat or do their homework for them. But according to MIT Technology Review, those early fears may have been overblown, and that was before this latest update:

“This initial panic from the education sector was understandable. ChatGPT, available to the public via a web app, can answer questions and generate slick, well-structured blocks of text several thousand words long on almost any topic it is asked about, from string theory to Shakespeare. Each essay it produces is unique, even when it is given the same prompt again, and its authorship is (practically) impossible to spot. It looked as if ChatGPT would undermine the way we test what students have learned, a cornerstone of education.

But three months on, the outlook is a lot less bleak. I spoke to a number of teachers and other educators who are now reevaluating what chatbots like ChatGPT mean for how we teach our kids. Far from being just a dream machine for cheaters, many teachers now believe, ChatGPT could actually help make education better.

Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more.”

Khan Academy founder Sal Khan, who kicked off the online education revolution, has stressed how critically important AI tutors could become. If the technology now exists, there’s no reason every child in the world shouldn’t have their very own AI tutor.

The Future of Education could feature AI Tutors.

Read Full Post »

Older Posts »