The AI Bill of Rights – all you need to know

In what could be termed yet another initiative to promote the use of responsible and trustworthy artificial intelligence (AI), the Biden administration has unveiled a new “AI Bill of Rights” which identifies some key principles to protect the rights of the American public in the age of AI.

The document, while acknowledging the great transformative power of artificial intelligence, provides a blueprint to avert potential harms caused by unaccountable algorithms and to protect civil rights, civil liberties, and privacy.

On Oct. 4, the White House’s Office of Science and Technology Policy (OSTP) released a set of voluntary guidelines, the Blueprint for an “AI Bill of Rights”, aimed at organizations ranging from governments at all levels to companies of all sizes. It identifies five principles to guide the design, use, and deployment of automated systems to protect all people against potential harms.

“Where existing law or policy—such as sector-specific privacy laws and oversight requirements—do not already provide guidance, the Blueprint for an AI Bill of Rights should be used to inform policy decisions,” reads an explainer by the OSTP.

The principles

The five aspirational and interrelated, yet non-binding, principles are: (1) Safe and Effective Systems, (2) Algorithmic Discrimination Protections, (3) Data Privacy, (4) Notice and Explanation, and (5) Human Alternatives, Consideration, and Fallback.

According to the first principle, individuals should be protected from unsafe or ineffective systems. The second principle addresses the discrimination individuals can face from algorithms and stresses that systems should be designed and used in an equitable way. Under the third principle, individuals should be protected from abusive data practices via built-in protections and should have agency over how data about them is used. The fourth principle stresses that individuals should know when an automated system is being used and understand how and why it contributes to outcomes that affect them. The fifth principle says that individuals should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.
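
To make the second principle concrete: the blueprint itself prescribes no code, but an auditor checking an automated decision system for one simple form of algorithmic discrimination might start with something like the following minimal sketch. All data, group labels, and thresholds here are hypothetical illustrations.

```python
# Minimal sketch: checking demographic parity of an automated decision system.
# Groups and decisions below are invented; real audits use far richer metrics.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs; returns the largest
    difference in approval rates between any two groups, plus the rates."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(sample)
print(f"approval rates: {rates}, parity gap: {gap:.2f}")
```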

The concerns

While the White House’s new initiative drew widespread media attention, it wasn’t without criticism.

For example, WIRED termed the “AI Bill of Rights” toothless against Big Tech. The Wall Street Journal, quoting some tech executives, wrote that “the nonbinding guidelines could lead to stifling regulation concerning artificial intelligence.”

“The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S.,” reads a news report by the Associated Press.

As per the document, the blueprint was developed through extensive consultation—with stakeholders from impacted communities to experts and practitioners as well as policymakers—on the issue of algorithmic and data-driven harms and potential remedies.

GAO report on challenges and benefits of ML technologies in medical diagnostics

The Government Accountability Office’s (GAO) latest report has identified several challenges affecting the development and adoption of machine learning (ML) technologies in medical diagnostics. The report also offers some policy options for addressing these challenges and enhancing the benefits.

The 103-page report, “Technology Assessment: Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics,” prepared for congressional requesters, was jointly published by the GAO and the National Academy of Medicine (NAM).

Cost of diagnostic errors

Emphasizing the importance of an effective, efficient, and accurate clinical diagnostic process, the GAO report cites a report by the Society to Improve Diagnosis in Medicine, according to which diagnostic errors affect more than 12 million Americans each year, with aggregate costs likely in excess of $100 billion.

The report says that challenges to the development and use of ML medical diagnostic technologies raise technological, economic, and regulatory questions. “… AI tools developed using historical data could unintentionally perpetuate biases, reduce safety and effectiveness for different groups of patients, and produce disparities in treatment,” notes the GAO report.

The challenges

Some of the challenges affecting the development and adoption of ML in medical diagnostics identified by the GAO are:

  1. Demonstrating real-world performance across diverse clinical settings and in rigorous studies. According to experts and stakeholders, medical providers may be reluctant to adopt ML technologies until their real-world performance has been adequately demonstrated in relevant and diverse clinical settings.
  2. Meeting clinical needs, such as developing technologies that integrate into clinical workflows. Medical providers are less likely to adopt ML technologies that do not address a clear clinical need, and many ML diagnostic technologies do not progress from development to adoption for this reason.
  3. Addressing regulatory gaps, such as providing clear guidance for the development of adaptive algorithms. Gaps in the regulatory framework may also pose a challenge to the development and adoption of ML technologies.

The policy options

As per the report, these and many other challenges affect various stakeholders, including technology developers, medical providers, and patients, and may slow the development and adoption of these technologies.

To address these challenges, the GAO report identifies three policy options to help tackle them or enhance the benefits of ML diagnostic technologies: encouraging evaluation of these technologies, improving access to high-quality data, and promoting collaboration across stakeholders.

  1. Evaluation: Policymakers could create incentives, guidance, or policies to encourage or require the evaluation of ML diagnostic technologies across a range of deployment conditions and demographics representative of the intended use (a minimal sketch of such disaggregated evaluation follows this list).
  2. Data Access: Policymakers could develop or expand access to high-quality medical data to develop and test ML medical diagnostic technologies.
  3. Collaboration: Policymakers could promote collaboration between developers, providers, and regulators in the development and adoption of ML diagnostic technologies.
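
As a rough illustration of the evaluation option, the sketch below shows how a diagnostic model’s performance can be reported per site or subgroup rather than only in aggregate. The labels, predictions, and per-site error rates are entirely synthetic, invented for the example; no clinical data or real model is involved.

```python
# Sketch: disaggregated evaluation of a diagnostic classifier by subgroup.
# Labels, predictions, and site names are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
site = rng.choice(["site_A", "site_B", "site_C"], size=n)
y_true = rng.integers(0, 2, size=n)

# Hypothetical model: ~10% base error, plus extra errors at site_C.
noisy = rng.random(n) < 0.1
degraded = (site == "site_C") & (rng.random(n) < 0.25)
y_pred = np.where(noisy | degraded, 1 - y_true, y_true)

for g in np.unique(site):
    m = site == g
    acc = (y_true[m] == y_pred[m]).mean()
    sens = (y_pred[m][y_true[m] == 1] == 1).mean()
    print(f"{g}: n={m.sum():4d}  accuracy={acc:.2f}  sensitivity={sens:.2f}")
```

Aggregate accuracy alone would hide the weaker performance at the third site; reporting per-subgroup metrics surfaces it, which is the kind of evaluation across deployment conditions the report encourages.
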
U.S. to leverage AI/ML to predict Ukraine’s weapons and ammo needs

From smart systems to smart weapons, artificial intelligence (AI) is already impacting our lives in once unimaginable ways, including how wars are planned and fought today.

With their great transformative power, AI and machine learning (ML) technologies have the potential to help nations gain a strong position of advantage over enemies, adversaries, and peer competitors on and off the battlefield.

From autonomous weapons to stealth drones and a long list of other matériel, AI and ML technologies are behind many modern warfare systems, and new applications are being explored almost every day that could change the face and the pace of warfare.

Something similar is happening at the International Donor Coordination Center, or IDCC, set up at U.S. military barracks in the German city of Stuttgart, where military personnel from nations including the U.S., Germany, Britain, and France work together to ensure Ukraine gets what it needs to help defend its sovereignty.

With the Russian invasion of Ukraine now past the seven-month mark, the United States and its allies have been providing the defending nation with the weapons, ammunition, and other aid it needs to defend itself against Russia. It is at the IDCC that U.S. officials are now working on AI/ML to predict Ukraine’s weapons and ammo needs.

“That’s partially why the next step is take the large amounts of data that the coalition is collecting and develop AI-driven techniques to anticipate those needs ahead of time, rather than just respond as they come in,” said Jared Summers, the chief technology officer of the 18th Airborne Corps, who is working with the IDCC in Germany.

The IDCC is the focal point of the entire multinational operation, tracking the needs and delivery of weapons, ammo, and other aid extended to Ukraine. To make the whole process more efficient and transparent, and to predict Ukraine’s weapons and ammo needs, efforts are being made to apply AI and ML. The idea is to translate valuable data into competitive advantage to save precious time and resources.
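
Neither the IDCC nor the Army has published the models involved, but the underlying idea can be sketched simply: fit a trend to observed daily consumption and project it forward, so resupply can be staged before a request arrives. The figures below are invented purely for illustration.

```python
# Illustrative sketch only: forecasting ammunition demand from past daily
# consumption with a simple linear trend. All figures are invented.
import numpy as np

daily_rounds = np.array([4800, 5100, 5300, 5050, 5600, 5900, 6100])  # past week
days = np.arange(len(daily_rounds))

slope, intercept = np.polyfit(days, daily_rounds, 1)    # least-squares trend
horizon = np.arange(len(daily_rounds), len(daily_rounds) + 7)
forecast = intercept + slope * horizon                  # projected next 7 days

print(f"projected need next week: {forecast.sum():,.0f} rounds")
```

A production system would fold in many more signals (engagement tempo, stock levels, delivery lead times), but the goal is the same: anticipate rather than react.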

The invasion continues, and so does the support from the allies. Besides training in how to use the weapons and ammo, Western allies at the IDCC have helped deliver over $8bn worth of weapons and other aid to Ukraine’s armed forces. These include, but are not limited to, thousands of autom

Bringing new focus to privacy enhancing technologies (PETs)

With technology becoming ubiquitous, the digital world landscape is fast evolving. Amid an ever-expanding data universe, a continuously available digital world has many new opportunities to offer. But there are challenges too. One of these challenges is our changing perspective on privacy.

Therefore, there are more laws today than in the past mandating stronger data and privacy protections. According to the U.S. Government Accountability Office (GAO), with the advent of new technologies and the proliferation of personal information, ‘the protection of personal privacy has become a more significant issue in recent years.’

In order to address such issues, the development and adoption of privacy-enhancing technologies (PETs) is becoming increasingly common. These technologies have the potential to safeguard data used in artificial intelligence (AI) and machine learning (ML) applications.

Defining PETs

Privacy-enhancing technologies make it possible to gain insights from sensitive data while protecting the privacy and proprietary information of individuals. These emerging technologies help data stay private throughout its lifecycle.

PETs also include maturing technologies, such as federated learning, which allow machine learning (ML) models to be trained collaboratively among organizations on high-quality datasets, without the data leaving safe and secure environments.
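
The core idea of federated learning fits in a few lines. In the minimal sketch below (a toy linear model with synthetic data, not any particular production framework), each organization trains locally and shares only its fitted model weights, which a coordinator then averages, the scheme commonly called federated averaging.

```python
# Minimal federated-averaging sketch: three organizations fit a linear model
# on private synthetic data; only the fitted weights leave each site.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])  # ground-truth relationship shared across sites

def local_update(n_samples):
    """Train on local private data; return weights and sample count only."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_samples

updates = [local_update(n) for n in (120, 80, 200)]  # three organizations
weights = np.array([w for w, _ in updates])
sizes = np.array([n for _, n in updates], dtype=float)

# Coordinator averages the weights, weighted by local dataset size;
# the raw records never leave their home environments.
global_w = (weights * sizes[:, None]).sum(axis=0) / sizes.sum()
print("federated model weights:", np.round(global_w, 3))
```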

As per the International Association of Privacy Professionals (IAPP), PETs refer to the use of technology to help achieve compliance with data protection legislation.

According to a report by the Federal Reserve Bank of San Francisco, the types of PETs include anonymization, encryption, multi-party computation, and differential privacy.
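
Of those techniques, differential privacy is perhaps the easiest to sketch: a query answer is released only after random noise calibrated to the query’s sensitivity has been added, so no individual record can be confidently inferred from the output. Below is a minimal version of the classic Laplace mechanism; the toy dataset and privacy budget are chosen purely for illustration.

```python
# Sketch of the Laplace mechanism for differential privacy: noise scaled to
# the query's sensitivity divided by the privacy budget epsilon.
import numpy as np

rng = np.random.default_rng(42)

def private_count(records, predicate, epsilon=0.5):
    """Release a count with epsilon-differential privacy. A counting query
    changes by at most 1 when one individual is added or removed, so its
    sensitivity is 1 and the noise scale is 1/epsilon."""
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 45, 29, 61, 52, 38, 47]  # toy dataset
print(f"noisy count of ages over 40: {private_count(ages, lambda a: a > 40):.1f}")
```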

Development of a national strategy 

Recent initiatives have brought new focus to PETs: a collaboration between the U.S. and U.K. governments on prize challenges to accelerate the development and adoption of PETs, and a request for information (RFI) issued last month by the White House Office of Science and Technology Policy (OSTP) seeking public comments to help inform the development of a national strategy on advancing privacy-enhancing technologies. These moves are expected to encourage widespread adoption of PETs across various departments and agencies in the future.

As per the RFI, this national strategy will put forth a vision for “responsibly harnessing privacy-preserving data sharing and analytics to benefit individuals and society. It will also propose actions from research investments to training and education initiatives, to the development of standards, policy, and regulations needed to achieve that vision.”

In one of its recent blogs, OSTP explained that the development of privacy-enhancing technologies can provide a pathway toward a future where researchers can analyze a broad and diverse swath of medical records without accessing anyone’s private data. It also noted that PETs can provide a pathway toward this future by leveraging data-driven technologies like artificial intelligence, while preserving privacy.

Integrating AI into diagnostics for heart transplant rejection

As of March this year, there are over 3,000 candidates on the heart transplant waiting list in the United States, according to the Organ Procurement and Transplantation Network’s (OPTN) online database.

According to OPTN, every 10 minutes another person is added to the national organ transplant waiting list, which may include adult and pediatric heart transplant candidates. Heart transplant surgery has been dubbed ‘a life-saving procedure’ for patients with end-stage heart disease or when other treatments are not effective. In the surgery, a diseased heart is removed and replaced with a healthy heart from a brain-dead donor.

The whole process, understandably, is not without risks. Among them are infection, failure of the donor heart, kidney failure, and transplant rejection, which occurs when the recipient’s immune system attacks the transplanted organ.

Detecting heart transplant rejection is challenging because patients may not experience symptoms in the early stages after transplantation. The good news is that scientists have now started harnessing the power of artificial intelligence (AI) to minimize these risks.

A team of Harvard Medical School investigators at Brigham and Women’s Hospital has tried to address this very challenge by leveraging new advances in artificial intelligence (AI). They have created a deep learning-based AI system that promises to identify signs of heart transplant rejection. The system, called the Cardiac Rejection Assessment Neural Estimator, or CRANE, can not only detect heart transplant rejection but also estimate the severity of rejection.

“Our results set the stage for large-scale clinical trials to establish the utility of AI models for improving heart transplant outcomes,” said Faisal Mahmood, HMS assistant professor of pathology at Brigham and Women’s, who is the senior author of the latest study.

To train CRANE to detect transplant rejection, the team of HMS investigators used thousands of pathology images from over 1,300 heart biopsies from Brigham and Women’s. They then validated the model using test biopsies and independent external test sets received from hospitals in Switzerland and Turkey.
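
The study itself used deep neural networks over whole-slide pathology images. As a greatly simplified, hypothetical stand-in, the pattern of training on one institution’s data and then checking performance on independent external cohorts looks like this (random features replace real image data; this is not the CRANE model):

```python
# Simplified sketch of internal training plus independent external validation.
# Features are random stand-ins for image-derived features, not real biopsies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

def make_cohort(n, shift=0.0):
    """Synthetic cohort: 20 features, binary rejection label. 'shift' mimics
    site-to-site differences in staining, scanners, or populations."""
    X = rng.normal(loc=shift, size=(n, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
    return X, y

X_train, y_train = make_cohort(1300)  # internal training biopsies
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for site, shift in [("internal test", 0.0),
                    ("external site 1", 0.3),
                    ("external site 2", 0.5)]:
    X, y = make_cohort(300, shift)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"{site}: AUC = {auc:.2f}")
```

External validation of this kind is what separates a model that merely fits its home institution from one that may generalize across hospitals.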

Scientists believe that these efforts have paved the way for clinical trials to establish the efficacy of the system fueled by AI to improve heart transplant outcomes.

DOD in search of disruptive technologies to prevent technological surprise

It is imperative that the Department of Defense nurture early research in emerging technologies to strengthen superiority and maintain advantage in order to prevent technological surprise by peer adversaries such as Russia and China, the Undersecretary of Defense for Research and Engineering said.

Heidi Shyu, who in her role as Undersecretary of Defense for Research and Engineering serves as the Chief Technology Officer for the DOD, recently spoke virtually at the McAleese Defense Programs Conference, saying the United States could not afford a leveling of its technology advantage.

Among other capabilities, Shyu cited cybersecure artificial intelligence as a potentially disruptive technology that the DOD would like to acquire.

“Furthering science technology and innovation across the department could not be more important in light of the current events unfolding in front of our eyes,” Shyu said while mentioning the Russian invasion of Ukraine. This furthering of disruptive technology, she observed, could be achieved through teamwork with allies and partners.

The National Defense Strategy recognizes the increasingly complex global security environment in the wake of rapid technological advancements and the changing character of war. While the DOD plans to modernize the nuclear triad and focus on missile defenses, it will also invest in military applications of AI and machine learning (ML) to gain competitive military advantages.

Separately, the DOD’s AI Strategy document also emphasizes adoption of AI by the United States together with its allies and partners to create ‘collective positions of strength’. It aims to lead in the responsible use and development of AI in a lawful and ethical manner. The document also warns that failure to leverage new advances in AI will result in legacy systems becoming irrelevant to the defense of the people, erode cohesion among allies and partners, and contribute to a decline in prosperity.

Terming teamwork an ‘asymmetric advantage’, the chief technology officer said: “By working together we can solve the toughest challenges much more rapidly.”

“In order to stay ahead of our adversaries, the department must harness the incredible innovation ecosystem both domestically and globally,” Shyu continued.

Department of Energy moving faster on AI

In a recent article, Forbes mentioned the Department of Energy as ‘one of the most science, technology, and innovation-focused US federal agencies.’

With its National Labs home to some of the fastest supercomputers in the world and its focus on transformative technologies, the DOE, the lead agency in the civilian use of AI, is indeed a forward-looking enterprise.

In 2019, the DOE established the Artificial Intelligence and Technology Office (AITO) with the aim of transforming the department into a world-leading AI-enabled enterprise by accelerating the development and deployment of safe, secure, and trusted AI.

The DOE recently released its FY22 Program Plan and FY23 Forecast. The plan says the department will invest in the intersection of AI and big data to improve the reproducibility, transparency, and scalability of AI-based technologies in order to advance its missions, from “nuclear security, energy, and science missions to operation and business components in an efficient and cost-effective manner.”

“In 2022, my team is focused on innovative AI governance where responsible and trustworthy AI outcomes are the standard. We do need more human centric integration in the AI lifecycle and a federated catalog of algorithms and data sets so that it is easier to track the impacts of our AI investments, which we are pursuing,” Pamela K. Isom, director of the AITO, told Forbes in an interview.

The DOE’s program plan is the AITO’s strategic operating model from the fourth quarter (Q4) of fiscal year (FY) 2021 through Q4 FY22 and sets out the priorities established for the department.

The five goals set out by the DOE in the plan are: (1) Responsible and Trustworthy AI/ML, (2) Departmental AI/ML Strategy, (3) AI/ML Council, (4) Strategic Partnership Framework, and (5) Workforce Education, Training and Upskilling.

Under these goals, the DOE is focused on an innovative AI governance model based on responsible and trustworthy use of AI and machine learning; awareness initiatives and published guidelines for protecting conversational AI systems from adversarial attacks; establishing an AI/ML strategy and an AI/ML Council; developing a strategic partnership framework; and building a workforce that understands AI’s unique requirements and can develop responsible and trustworthy AI solutions.

At a recent webinar, Pamela Isom also said it was important to have a robust AI Risk Management Framework because artificial intelligence is being used in critical infrastructure and is, as a result, vulnerable. She also disclosed that the DOE is finalizing an AI Risk Management Playbook, due for release next year.

Exploring the World of Quantum Computing

Today we are living in a big data world, and it keeps getting bigger. This exponential growth of data comes with both opportunities and challenges, and the two are intertwined: addressing certain challenges can open up even more opportunities.

Tapping into these countless opportunities, for the benefit of all, requires mastering the art of leveraging the true potential of big data. It is this quest to unlock big data that keeps scientists and engineers exploring, innovating, and inventing machines that can process data intelligently and fast.

With their tremendous computing power, quantum computers promise to execute certain algorithms more efficiently than classical machines, even supercomputers, helping solve previously intractable problems and bringing the future into reality.

At the heart of quantum computers are quantum bits, or qubits, the fundamental unit of information in quantum computing. Unlike binary bits, which are either 1 or 0, qubits can exist in a ‘superposition’ of both 1 and 0. By exploiting the laws of quantum mechanics, quantum computers can encode some calculations exponentially faster than even supercomputers, where ordinary bits hold the reins of power.
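
The math behind that statement can be illustrated with a few lines of linear algebra: a qubit is a two-component state vector, a Hadamard gate puts it into an equal superposition, and the measurement probabilities are the squared amplitudes. (This simulates a single qubit classically, as an illustration only; it is not quantum hardware.)

```python
# Classical simulation of one qubit entering superposition.
import numpy as np

ket0 = np.array([1.0, 0.0])                   # the |0> basis state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0              # equal superposition of |0> and |1>
probs = np.abs(state) ** 2    # Born rule: probabilities are squared amplitudes

print("amplitudes:", np.round(state, 3))   # [0.707 0.707]
print("P(0), P(1):", np.round(probs, 3))   # [0.5 0.5]
```

Simulating n qubits classically requires tracking 2**n amplitudes, which is exactly why quantum hardware can, for some problems, outrun any classical machine.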

Quantum computing is still in a nascent stage and a lot remains to be explored. However, the following example will help explain how quantum computing is going to transform our world in the not-too-distant future.

According to IBM, even a simple battery is more like a complex ecosystem, and there isn’t a supercomputer on the planet right now that could accurately simulate what is going on inside one. Mercedes-Benz, with a focus on next-generation battery technology, has turned to IBM Quantum to explore how it can simulate the chemical reactions in batteries more accurately in order to find new materials for more efficient batteries. With the aim of reducing its carbon impact on the planet, the company’s goal is to make the entire Mercedes-Benz vehicle fleet carbon-neutral by 2039.

Quantum computing isn’t just about future cars; it also holds promise to reshape other sectors and industries, including healthcare. As per IBM, in the healthcare industry quantum computing could enable a range of disruptive use cases for providers and health plans by accelerating diagnoses, personalizing medicine, and optimizing pricing.

JAIC starts AI adoption journey

In a bid to accelerate Artificial Intelligence (AI) adoption, the Joint Artificial Intelligence Center (JAIC) has launched the AI Adoption Journey series.

Through this new weekly series, the JAIC, the Department of Defense’s (DoD) focal point for harnessing the potential of artificial intelligence, aims to share lessons learned, best practices, and resources with users.

The suite of videos, documents, and other information will be available on the JAIC website for everyone to explore, to help map out the critical milestones along the journey to modern, relevant AI-enabled capabilities. The new series is in line with the JAIC’s efforts to lower the barriers to AI adoption with the support of its innovation partners from academia and the tech industry.

In his special video message at the launch of the AI Adoption Journey series, Lt. Gen. Michael Groen, director of the JAIC, said the department was moving toward an AI-enabled future. Terming it a transformational moment in history, Groen said: “Making this transformation successful is a core necessity if we’re going to compete on the battlefield in any domain.”

The unclassified summary of the DoD’s AI Strategy also emphasizes adoption of AI by the United States together with its allies and partners to create what it terms ‘collective positions of strength’. The department’s AI strategy focuses on thoughtful, responsible, and human-centered adoption of AI.

The strategic approaches that will guide the DOD in its efforts to accelerate AI adoption include, but are not limited to, cultivating a leading AI workforce; delivering AI-enabled capabilities that address key missions; engaging with commercial, academic, and international allies and partners; and leading in military ethics and AI safety.

The DOD’s AI Strategy directs the department to incorporate artificial intelligence into decision-making and operations to protect the safety and security of U.S. citizens, derive military advantage, improve readiness and accuracy, and enhance mission precision in order to reduce collateral damage.

Human-centered AI adoption

From our daily online and offline experiences, the impact of artificial intelligence (AI) on our everyday lives is no longer a secret. Beyond that, AI is fast transforming our world with promises of a better future, and this despite the fact that the true potential of AI technology is yet to be unlocked.

AI isn’t just a good fit for finance; the nascent technology has proved its worth in almost every sector and industry, including healthcare, security, energy, agriculture, and space science, to name a few. Its applications range from medical diagnostics and autonomous vehicles to AI-powered weapon systems and more.

While we are today seeing numerous actual benefits of artificial intelligence, it is perhaps AI’s unseen potential that has prompted countries, companies and firms to accelerate its adoption in order to gain and strengthen technological competitiveness and superiority.

Fear of the known

The age of algorithms is here. AI and machine learning (ML) technologies continue to become a larger part of our everyday lives, even if we don’t notice it. From government departments and agencies to companies and firms, everyone seems to have jumped into the race to capture AI’s opportunities and harness its potential for advantage.

But this AI adoption journey is not easy because of inherent challenges as well as human-created barriers. Of these, fear is one big impediment: the fear that human workers will become obsolete because of AI/ML-powered machines. The good thing is that this ‘fear of the known’ is easy to fight and win. Experts suggest investing in AI education in order to empower workers to adapt to using artificial intelligence, preparing them for the rapidly changing world of work.

Towards an AI-enabled future

There is increasing focus on the role of AI as augmentation and on developing an AI-oriented culture within organizations. For example, the Department of Defense (DoD) emphasizes the role of AI as augmentation rather than as a replacement for humans. The department is all for human-centered adoption of AI. As per the DoD’s AI Strategy document, the strategic approaches, or focus areas, that guide the DoD in its efforts to accelerate AI adoption include cultivating a leading AI workforce.

“When AI is adopted broadly, employees up and down the hierarchy will augment their own judgment and intuition with algorithms’ recommendations to arrive at better answers than either humans or machines could reach on their own,” write the authors in their article “Building the AI-Powered Organization” in Harvard Business Review. They also suggest that AI has the biggest impact when it’s developed by cross-functional teams with a mix of skills and perspectives.