Imagine conversing with your Echo device and having it respond with a detailed memory of your schedule and inbox. It helps plan and shop for upcoming celebrations, checks connected Ring cameras for delivered packages, and adjusts the thermostat when you mention it's unbearably hot, all without you having to repeat the "Alexa" wake word. That is the leap from the classic Alexa to the new Alexa+ assistant, into which Amazon has integrated generative AI similar to the technology behind ChatGPT. Take a look:
[Insert YouTube video]
I want this level of convenience in my daily routine, but I remain wary of what it will cost.
### AI’s Growing Appetite
So, how do AI assistants get better? By feeding them more data: text, images, sound, video, and anything else they can learn from. Gathering that data is getting harder, since legal access to much of the material is limited.
Training on material without permission can lead to hefty settlements, as Amazon-backed Anthropic recently learned when it agreed to pay $1.5 billion in a lawsuit brought by book authors. With numerous copyright disputes threatening the big AI corporations, the next best thing is to turn users into willing contributors.
[Insert image of Alexa+ assistant]
Amazon’s data streams are unmatched in volume and significance, thanks to millions of Echo devices in homes worldwide. While Google and Meta are competitors, they don’t have the same level of personal integration—like the speaker in your room or the Fire TV in your living area.
Amazon depends heavily on this data to make Alexa+ truly useful. However, a recent policy change raises concerns: starting March 28, all voice recordings are sent to Amazon's cloud servers for processing, and users who decline lose features like Voice ID. Amazon explained:
“As we expand Alexa’s capabilities with generative AI features that rely on secure cloud processing, support for local processing will be discontinued.”
### The Power Dilemma
Generative AI chatbots like Gemini, ChatGPT, and Alexa+ are notoriously power-hungry. Running them locally requires advanced chips, and only a handful of devices, such as certain flagship smartphones and purpose-built AI PCs, can handle on-device AI. Amazon is unlikely to embed that kind of high-performance hardware into small, affordable speakers and displays, which makes local processing infeasible at scale.
Consequently, Amazon and others must send user commands to robust servers online. This raises questions: Will Amazon simply process your voice and text inputs, or will it also retain this data for AI training purposes?
[Insert image of Alexa+ on Echo display]
Amazon's history adds to this concern. The company recently revealed that user interactions with its AI are stored for up to five years, and it has previously been fined for privacy breaches, such as retaining children's Alexa recordings and letting employees listen to and review voice recordings. Echo recordings have also been used in legal proceedings, and the FTC has scrutinized Amazon over privacy violations involving video recordings of private spaces.
[Insert image of Alexa+ monitoring video]
Handling this information responsibly becomes a serious issue. To offer a truly personalized experience, Alexa+ needs to remember past conversations and sensitive data like calendars, emails, and shopping lists. That capability, while promising, raises significant privacy red flags.
### A Compromising Balance
Amazon appears to be grappling with the implementation of Alexa+. Early tests show that the memory functions often malfunction, even for simple tasks like saving a frequent flyer number. Demonstrations highlighted its ability to remember places in your house and other scenes captured by connected cameras, but that requires streaming video feeds to the cloud and relies on more expensive hardware. And access to Alexa+ currently costs a monthly fee unless you're a Prime member.
[Insert image of Alexa+ watching video]
The hardware challenge complicates matters further, especially since many users opt for low-cost Echo speakers—some as inexpensive as $50—that can’t support the advanced processing needed for Alexa+. This means Alexa+ will likely stay reliant on cloud processing for the foreseeable future. Transparency around data privacy and usage remains vital, considering how seamlessly Alexa+ connects with third-party services like Uber or Grubhub and allows sharing of documents, emails, and photos.
Ultimately, the more you want Alexa+ to do, the more data it needs from you. The choice to adopt or avoid the system is yours, but the pressure is on Amazon to show that it can prioritize privacy and trust amid the AI boom. Skepticism remains, but so does some hope that corporate responsibility will prevail.