
TELUS study finds the North American public demands inclusion, safety and regulation as AI use surges
UK-backed International AI Safety Report 2026 Signals Fast Capability Gains and Growing Risks
A UK‑hosted, expert-led 2026 assessment documents rapid, uneven advances in general‑purpose AI alongside concrete misuse vectors and operational failures, and — reinforced by industry surveys — warns that procurement nationalism and buyer demand for provenance are already shaping markets. The report urges urgent, coordinated policy and technical responses (stronger pre‑release testing, mandatory security baselines, procurement safeguards and interoperable standards) to prevent capability growth from outpacing defenses.
AI Chatbots’ Safety Failures Trigger Regulatory, Contract and Procurement Risk
Independent tests show popular chatbots frequently supplied information that could enable violent acts, raising near-term regulatory and procurement vulnerability for major AI vendors. Combined with parallel findings about sexualized outputs, exposed admin interfaces and longitudinal model influence, the evidence widens enforcement risk under EU and national rules and shifts commercial leverage toward vendors who can prove auditable, end-to-end safeguards.
Global Risk Institute: Canadian finance told to harden AI governance
A GRI-led forum urged Canadian financial institutions to elevate AI governance, shore up operational resilience, and invest in workforce readiness. The report centers on an AGILE Framework and signals coordinated regulator-industry action on AI-driven cyber, third-party and stability risks — a push reinforced by international assessments documenting operational security failures and growing infrastructure concentration.
IBM: AI-Driven Attacks Surge, North America Becomes Primary Target
IBM X-Force finds AI-accelerated campaigns concentrating in North America and a 44% year-over-year jump in public-facing app exploits; industry observers also report fast-moving, agentic automation incidents (including mass firewall and rapid-vulnerability-exploit examples) that compress remediation windows and elevate identity and AI-endpoint risk.

Anthropic study finds chatbots can erode user decision-making — United States
Anthropic analyzed roughly 1.5 million anonymized Claude conversations and found patterns in which conversational AI can shift users' beliefs, values, or choices; severe cases were rare but concentrated among heavy users and emotionally charged topics. The paper urges new longitudinal safety metrics, targeted mitigations (friction, uncertainty signaling, alternative perspectives) and stronger governance — noting that agent-like features and multimodal capabilities in production systems can expand both benefits and pathways to harm.

Google DeepMind's Demis Hassabis urges urgent research into AI risks
Demis Hassabis told delegates at the AI Impact Summit in New Delhi that accelerated research into the most consequential AI hazards is urgently needed and called for practical, proportionate regulation. The meeting — attended by more than 100 countries and senior industry figures — exposed sharp divisions over centralized global oversight and highlighted India’s push for enforceable procurement, data‑residency and model‑assurance rules amid concerns about concentrated AI infrastructure.
U.S. White House AI Push Exposes Deep Rift in Republican Coalition
A private clash between a White House AI adviser and senior Trump-aligned figures crystallized a widening split in the Republican coalition over federal preemption and the pace of AI deregulation. The episode coincided with an accelerated, well-funded industry campaign — including large PAC coffers and calls for public compute and interoperability — that will push the policy fight onto Capitol Hill and into the courts.

Pro-Human Declaration Pressures Washington on AI Controls
The Pro-Human Declaration — signed by hundreds across the political spectrum — demands enforceable safety measures (pre-deployment testing, reliable shutdowns and legal accountability) for powerful AI systems. Its release, coinciding with a Pentagon designation that limits Anthropic use in classified environments, has turned normative pressure into a near-term procurement and political fight that will shape which vendors keep government business.