GitHub Copilot shifts to usage-based pricing June 1 - but there's good news
Under the new approach, if you run out of credits, you can't use the service. GitHub plans to preview the new billing in early May.
Stay informed on AI governance, compliance, and regulation news. Curated updates on AI ethics, policy, and enforcement from trusted sources.
Monitoring 7147+ articles from 21+ trusted sources including MIT Technology Review, TechCrunch, The Verge, and AI News in 2026.
Randy New is the founder and editor of AI Governance Watch. He is a FinTech executive with over 30 years of experience in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy specializes in cybersecurity intelligence and AI governance.
Randy also publishes Cyber Security Wire and Human vs AI. Learn more about AI Governance Watch and its mission.
AI Governance Watch is a curated news platform that aggregates AI governance, compliance, and regulation news from over 21 trusted sources. It helps professionals track AI policy developments worldwide.
Sources include MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications. As of 2026, the platform has aggregated 7147+ articles across six categories.
Articles are automatically categorized into six areas: regulation, policy, ethics, compliance, enforcement, and general AI news. Each category focuses on a specific aspect of AI governance.
Recently curated articles on AI regulation, policy, and compliance:
They were doing it in Texas... Core Scientific is trading coins for tokens, revealing plans on Monday to convert a 300-megawatt bitcoin mining operation in Pecos, Texas, to a 1.5-gigawatt AI datacenter campus.…
Over 600 Google employees signed a letter to CEO Sundar Pichai demanding that Google block the Pentagon from using its AI models for classified purposes, reports The Washington Post. Its organizers claim many of the signers work in Google's DeepMind AI lab, and include more than 20 principals, directors, and vice presidents. According to the Post, the letter says that "The only way to guarantee…
The social media giant’s new project with Overview Energy is slated to reach commercialization by 2030.
OpenAI has won major concessions from its largest shareholder, Microsoft, that will allow it to sell products on AWS, while Microsoft gets more cash in a revenue-share agreement.
Ineffable Intelligence, a British AI lab founded just a few months ago by former DeepMind researcher David Silver, has raised $1.1 billion in funding at a valuation of $5.1 billion.
Eish shame man! Maybe you shouldn't ask AI to set the rules for AI use? South Africa has pulled its draft national AI policy after discovering that it was citing sources that exist only in the fertile imagination of a chatbot.…
Executives from Citi, Home Depot, and Capcom describe early work with AI agents. While AI agents have moved from experimental tools to customer-facing workers in a matter of months, the next challenge is governance and reliability once those agents touch real money, real shoppers, and real creative output.…
OpenAI and Microsoft's partnership-turned-situationship just got even less committed. And a clause about artificial general intelligence, which has for years dictated the future of their deal, has officially been dropped. On Monday morning, Microsoft announced a handful of big changes to its long-standing OpenAI deal. Microsoft will remain OpenAI's "primary cloud partner, and OpenAI products will ship first on Azure, unless Microsoft cannot and chooses not to support the necessary capabilities."
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. In February, I picked up a flyer at an anti-AI march in London. I can’t say for sure whether or not its writers meant to riff on South Park’s underpants gnomes. But…
The data center industry’s rapid growth has raised concerns among state lawmakers, who are now pushing to limit AI data centers’ impact on utility bills, water use and the electric grid.
Skye's new AI app attracted investors before it even launched — a sign of interest in a more AI-aware iPhone.
Sam Altman and Elon Musk are set to face off in a high-stakes trial that could alter the future of tech’s leading AI startup, OpenAI. The trial begins with jury selection on April 27th, as Musk pushes forward his 2024 lawsuit that accuses OpenAI of abandoning its founding mission of developing AI to benefit humanity and shifting focus to boosting profits instead. Musk was a cofounder of OpenAI and claims that Altman and co-founder Greg Brockman tricked him into giving the company money, only to…
Facebook provider also working with energy storage firm to keep 100 hours of juice on hand. With AI demand growing, Facebook parent Meta is looking for new ways to power its datacenters, with one ambitious project pledging to send solar power down from orbit. Another agreement offers Meta the opportunity to store enough power to keep its bit barns going, even when the grid is over capacity or down.…
No. More. Exclusivity. Redmond keeps the ring until 2032, but OpenAI is free to see other clouds. Once tied tightly together, Microsoft and OpenAI have amended their agreement, making the Windows giant's license non-exclusive. In exchange, Microsoft will no longer owe OpenAI a revenue share.…
AI governance is the set of rules, policies, and frameworks that ensure artificial intelligence is developed and used responsibly. It covers ethical guidelines, compliance standards, and oversight mechanisms to keep AI safe, fair, and accountable.
The EU AI Act requires businesses to classify their AI systems by risk level and meet specific obligations. High-risk systems need conformity assessments, technical documentation, and human oversight. Non-compliance can result in fines up to €35 million or 7% of global turnover.
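To make the classification step more concrete, here is a minimal illustrative sketch in Python. The risk tiers loosely mirror the Act's structure, but the obligation lists and names below are simplified assumptions for illustration, not an official mapping from the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers loosely following the EU AI Act's structure."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict obligations before deployment
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

# Illustrative, non-exhaustive obligations per tier -- an assumption for this sketch.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "technical documentation", "human oversight"],
    RiskTier.LIMITED: ["notify users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```

In practice, the classification itself is the hard part; a compliance team would document why a system falls into a given tier before attaching obligations like these.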
The NIST AI RMF is a voluntary U.S. framework that helps organizations identify, assess, and mitigate AI-related risks. It is built around four core functions: Govern, Map, Measure, and Manage.
AI compliance is critical because governments worldwide are actively enforcing AI regulations. The EU AI Act carries heavy fines, the U.S. has expanded federal AI oversight, and countries like Canada, Brazil, and China have enacted AI-specific laws. Non-compliance risks penalties, reputational harm, and operational disruption.
The key AI ethics principles are fairness, transparency, accountability, privacy, safety, human oversight, and inclusiveness. These principles are reflected in major frameworks including the OECD AI Principles and the EU Ethics Guidelines for Trustworthy AI.
Organizations implement AI risk management by creating governance structures, running impact assessments, testing for bias, monitoring model performance, and documenting decisions. The NIST AI RMF and ISO/IEC 42001 provide standardized approaches for this process.
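As a rough illustration of the "monitor model performance" step, the sketch below compares a model's live accuracy against a documented baseline and flags drift beyond a tolerance. The metric, threshold, and function name are assumptions for the example, not requirements of NIST AI RMF or ISO/IEC 42001.

```python
def check_performance_drift(baseline_accuracy: float,
                            current_accuracy: float,
                            tolerance: float = 0.05) -> bool:
    """Flag the model for review if accuracy dropped by more than `tolerance`.

    The 5% tolerance is an arbitrary example value; a real program would set
    thresholds per use case and document the rationale.
    """
    return (baseline_accuracy - current_accuracy) > tolerance

# Example: a model documented at 92% accuracy now measures 85% in production.
if check_performance_drift(0.92, 0.85):
    print("Performance drift detected: trigger review and document the finding.")
```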
Major AI regulations include the EU AI Act, U.S. Executive Orders on AI Safety, Canada's AIDA, South Korea's AI Basic Act, China's Generative AI rules, Brazil's AI framework, and Japan's AI guidelines. Over 60 countries have enacted or proposed AI-specific regulations.
An AI impact assessment is a structured evaluation of how an AI system may affect individuals and society. It examines risks such as bias, privacy violations, and safety concerns. Under the EU AI Act, deployers of certain high-risk AI systems must carry out fundamental rights impact assessments.
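For teams that want to capture these evaluations in a structured way, here is a minimal sketch of how an impact-assessment record might look in code; the field names are illustrative assumptions, not a template mandated by any regulation.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """Illustrative record of an AI impact assessment (field choices are examples)."""
    system_name: str
    intended_use: str
    affected_groups: list[str] = field(default_factory=list)
    bias_risks: list[str] = field(default_factory=list)      # e.g. skewed training data
    privacy_risks: list[str] = field(default_factory=list)   # e.g. re-identification
    safety_risks: list[str] = field(default_factory=list)    # e.g. unsafe recommendations
    mitigations: list[str] = field(default_factory=list)

assessment = ImpactAssessment(
    system_name="loan-approval-model",
    intended_use="consumer credit decisions",
    affected_groups=["loan applicants"],
    bias_risks=["historical lending data may encode past discrimination"],
    mitigations=["bias testing before release", "human review of declines"],
)
```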
ISO/IEC 42001 is the international standard for AI management systems. It provides a certification framework that helps organizations establish, implement, and improve their AI governance practices in a structured and auditable way.
The AI Bill of Rights is a White House blueprint outlining five principles to protect Americans from AI harms: safe and effective systems, freedom from algorithmic discrimination, data privacy, notice and explanation, and human alternatives and fallback options.
AI Governance Watch aggregates news from over 21 trusted sources including MIT Technology Review, TechCrunch, and The Verge. Articles are automatically categorized into topics like regulation, policy, ethics, compliance, and enforcement to help professionals track AI governance developments.
Algorithmic bias occurs when an AI system produces systematically unfair outcomes due to flawed data or design assumptions. It can lead to discrimination based on race, gender, or other protected characteristics. Detecting and mitigating bias is a core requirement of most AI governance frameworks.
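One common (though by no means sufficient) way to surface this kind of bias is to compare positive-outcome rates across groups. The sketch below computes a demographic parity difference; the toy data, group labels, and function name are illustrative assumptions.

```python
def demographic_parity_difference(outcomes: list[int], groups: list[str],
                                  group_a: str, group_b: str) -> float:
    """Difference in positive-outcome rates (outcome == 1) between two groups."""
    def positive_rate(group: str) -> float:
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members) if members else 0.0
    return positive_rate(group_a) - positive_rate(group_b)

# Toy data: hiring outcomes (1 = offer) for applicants from two groups.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(outcomes, groups, "A", "B")
print(f"Selection-rate gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50 here
```

A gap of this size on real data would typically trigger further investigation; most governance frameworks also call for looking beyond a single metric, since different fairness measures can conflict.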
The key AI governance frameworks are the EU AI Act, NIST AI RMF, OECD AI Principles, ISO/IEC 42001, the AI Bill of Rights, and Canada's AIDA. These frameworks set rules for AI risk management, compliance, and ethical use.
| Framework | Region | Status | Focus |
|---|---|---|---|
| EU AI Act | European Union | In Force | Risk-based AI regulation with tiered requirements |
| NIST AI RMF | United States | Active | Voluntary risk management framework (Govern, Map, Measure, Manage) |
| OECD AI Principles | International | Active | International guidelines for trustworthy AI |
| ISO/IEC 42001 | International | Published | AI management system certification standard |
| AI Bill of Rights | United States | Published | Blueprint for protecting civil rights in AI era |
| Canada AIDA | Canada | In Progress | Artificial Intelligence and Data Act |
According to Stanford HAI's AI Index Report, over 60 countries have enacted or proposed AI-specific regulations as of 2026. The trend is toward mandatory compliance requirements rather than voluntary guidelines.
AI Governance Watch was founded by Randy New, a FinTech executive with over 30 years of leadership in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy operates at the intersection of financial technology and emerging risk disciplines, with a particular focus on cybersecurity intelligence and AI governance.
Randy New also publishes Cyber Security Wire (cybersecurities.pro) and Human vs AI (humanvsai.tech). AI Governance Watch curates and aggregates AI governance news from authoritative sources including MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications.
For more information, visit our contact page or subscribe to our newsletter for daily or weekly updates.
"AI technologies can provide substantial benefits, but also pose risks. A responsible approach to AI requires both innovation and guardrails."
"AI actors should respect the rule of law, human rights, democratic values, and diversity, and should implement appropriate safeguards to ensure a fair and just society."
"Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public."
"Artificial intelligence should be a tool for people and be a force for good in society, with the ultimate aim of increasing human well-being."
"The number of AI-related regulations has increased sharply in recent years. In 2023 alone, there were 25 AI-related regulations enacted in the U.S., a significant increase from just one in 2016."
"AI systems must not be used for social scoring or mass surveillance purposes. Member States should ensure that AI systems do not undermine human dignity."