News
- Google execs have been talking hardware following the Pixel 10 launch
- Flip phones, smart rings, and tablets aren't on the way
- The company is concentrating on phones and AI instead
We've just been treated to a host of new Google Pixel devices, including four different Pixel 10 phones, but we also have news about Google devices that aren't coming – including a flip foldable and a successor to the Pixel Tablet from 2023.
Speaking to Mark Gurman and Samantha Kelly at Bloomberg, Google's Vice President of Devices and Services Shakil Barkat confirmed that there are no plans for a Google flip foldable to join the Pixel 10 Pro Fold.
Barkat also ruled out a smart ring, and said the Pixel Tablet series is on pause until a "meaningful future" can be figured out for the product category. It seems the likes of Samsung will be left to release those kinds of devices for the time being.
The status on smart glasses, meanwhile, is "TBD" – it seems Google is happy to stay focused, for now. "Every time a new type of category of product gets added, the bar on maintenance for the end user keeps going up," says Barkat. "It's already pretty painful."
The "vanguard" of AI
Google execs also used the interview to hype up what they're working on. Rick Osterloh, head of Google's hardware and Android divisions, described the Pixel 10 as a "super strong release" in what is now a "mature category".
The Pixel 11 is almost finalized, apparently, while work has started on the Pixel 12. Google design chief Ivy Ross says that the company is aiming for big visual changes to the Pixel phones "every two to three years" – so watch this space.
As you would expect, the Google team pushed AI as being the big innovation that'll be happening on phones over the next few years, via Gemini and features such as Magic Cue, which surfaces key info from your phone when you need it.
Osterloh says he wants Android to be "on the vanguard of where AI is going", and that Google isn't overly worried about Pixel sales: the phones account for about 3% of the US market at the moment, compared to a 49% share for Apple.
- Trump criticizes legacy US Government websites as expensive and poor to use
- New "America by Design" initiative will modernize government agency sites
- Airbnb co-founder Joe Gebbia appointed as Chief Design Officer of National Design Studio
President Trump may soon be browsing for the best website builders after ordering improvements to federal government websites and physical spaces, in the hope of making them more attractive for both workers and customers.
“The Government has lagged behind in usability and aesthetics,” Trump said in a new Executive Order, noting the need for system modernization that could tackle high maintenance costs in the process.
The Executive Order explains legacy systems can be costly to maintain and costly to American citizens, who can spend more time than necessary trying to navigate them, hence the need for change.
Trump wants to modernize US Government websites
The Order introduces Trump’s new ‘America by Design’ initiative, which begins with high-touchpoint sites where citizens are most likely to interact with government agencies.
A new National Design Studio will be formed, and a Chief Design Officer appointed, to oversee the project.
“It is the policy of my Administration to deliver digital and physical experiences that are both beautiful and efficient, improving the quality of life for our Nation,” Trump wrote.
The National Design Studio has been tasked with reducing duplicative design costs, much in the same way that the White House has already started centralizing IT procurement to boost cost efficiency.
It will also use a standardized design for consistency and trust, and improve the quality of public-facing experiences.
Agencies have been given until July 4, 2026, to deliver their initial results after consulting with the Chief Design Officer.
Separate Reuters reporting has revealed Airbnb co-founder Joe Gebbia will lead the National Design Studio as Chief Design Officer, with the Internal Revenue Service set to be the first place to see an overhaul.
Trump’s Order also confirms the “temporary organization” will close in three years, on August 21, 2028, suggesting that site modernization could be complete even before that.
As artificial intelligence (AI) tools like ChatGPT, Copilot, and Grok, along with predictive analytics platforms, become embedded in everyday business operations, many companies are unknowingly walking a legal tightrope.
While AI tools provide many benefits - streamlining workflows, enhancing decision-making, and unlocking new efficiencies - the legal implications are vast, complex, and often misunderstood.
From data scraping to automated decision-making, the deployment of AI systems raises serious questions around copyright, data protection, and regulatory compliance.
Without robust internal frameworks and a clear understanding of the legal landscape, businesses risk breaching key laws and exposing themselves to reputational and financial harm.
GDPR and the Use of AI on Employee Data
One of the most pressing concerns is how AI is being used internally, particularly when it comes to processing employee data. Many organizations are turning to AI to support HR functions, monitor productivity, or even assess performance. However, these applications may be in direct conflict with the UK General Data Protection Regulation (GDPR).
GDPR principles such as fairness, transparency, and purpose limitation are often overlooked in the rush to adopt new technologies. For example, if an AI system is used to monitor employees without their informed consent, or if the data collected is repurposed beyond its original intent, the business could be in breach of data protection law.
Moreover, automated decision-making that significantly affects individuals, such as hiring or disciplinary actions, requires specific safeguards under GDPR, including the right to human intervention.
The Legal Grey Area of Data Scraping
Another legal minefield is the use of scraped data to train AI models. While publicly available data may seem fair game, the reality is far more nuanced. Many websites explicitly prohibit scraping in their terms of service, and using such data without permission can lead to claims of breach of contract or even copyright infringement.
This issue is particularly relevant for businesses developing or fine-tuning their own AI models. If training data includes copyrighted material or personal information obtained without consent, the resulting model could be tainted from a legal standpoint. Even if the data was scraped by a third-party vendor, the business using the model could still be held liable.
Copyright Risks in Generative AI
Generative AI tools, such as large language models and image generators, present another set of challenges. Employees may use these tools to draft reports, create marketing content, or process third-party materials. However, if the input or output involves copyrighted content, and there are no proper permissions or frameworks in place, the business could be at risk of infringement.
For instance, using generative AI to summarize or repurpose a copyrighted article without a license could violate copyright law. Similarly, sharing AI-generated content that closely resembles protected work may also raise legal red flags. Businesses must ensure their employees understand these limitations and are trained to use AI tools within the bounds of copyright law.
The Danger of AI “Hallucinations”
One of the lesser-known but increasingly problematic risks of AI is the phenomenon of “hallucinations” - where AI systems generate outputs that are factually incorrect or misleading, but presented with confidence. In a business context, this can have serious consequences.
Consider a scenario where an AI tool is used to draft a public document or legal summary, and it includes fabricated company information or cites incorrect regulations. If that content is published or relied upon, the business could face reputational damage, client dissatisfaction, or even legal liability. The risk is compounded when employees assume the AI’s output is accurate without verifying it properly.
The Need for Internal AI Governance
To mitigate these risks, businesses must act promptly to implement robust internal governance frameworks. This includes clear policies on how AI tools can be used, mandatory training for employees, and regular audits of AI-generated content.
Data Protection Impact Assessments (DPIAs) should be conducted whenever AI is used to process personal data, and ethical design principles should be embedded into any AI development process.
It’s also critical to establish boundaries around the use of proprietary or sensitive information. Employees interacting with large language models must be made aware that anything they input could potentially be stored or used to train future models. Without proper safeguards, there’s a real risk of inadvertently disclosing trade secrets or confidential data.
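One common safeguard of the kind described above is a pre-submission filter that strips obvious personal identifiers from a prompt before it leaves the organization. The sketch below is a minimal, hypothetical illustration (the patterns and names are assumptions, not from the source, and regex redaction alone is far from a complete solution):

```python
import re

# Hypothetical redaction patterns - illustrative only, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(redact(prompt))  # Contact Jane at [EMAIL] or [PHONE].
```

In practice, a filter like this would sit alongside policy controls (approved tools, logging, DPIAs) rather than replace them, and would need far richer detection than two regexes.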
Regulatory Focus in 2025
Regulators are increasingly turning their attention to AI. In the UK, the Information Commissioner’s Office (ICO) has made it clear that AI systems must comply with existing data protection laws, and it is actively investigating cases where this may not be happening. The ICO is particularly focused on transparency, accountability, and the rights of individuals affected by automated decision-making.
Looking ahead, we can expect more guidance and enforcement around the use of AI in business. The UK is currently consulting on its AI Bill, which aims to regulate artificial intelligence by establishing an AI Authority, enforcing ethical standards, ensuring transparency, and promoting safe, fair, and accountable AI development and use - standards that businesses will need to comply with.
AI is transforming the way we work, but it’s not a free pass to bypass legal and ethical standards. Businesses must approach AI adoption with caution, clarity, and compliance to safeguard their staff and reputation. By investing in governance, training, and legal oversight, organizations can harness the power of AI while avoiding the pitfalls.
The legal risks are real, but with the right approach, they are also manageable.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro