The use of artificial intelligence (AI) continues to attract significant interest across society, driven by the rapid rise of generative AI. From personal use to business applications, AI is no longer just a partner for brainstorming; it is increasingly taking on diverse roles such as generating programs and creating visual content. At the same time, as AI becomes more deeply integrated into society, the question of whether AI can truly be trusted is growing ever more pressing, alongside factors such as performance and efficiency. How do AI-generated outputs affect individuals and society? Who should ultimately take responsibility for decisions made by AI, and how? These questions are becoming more complex as the technology continues to evolve.

The Hitachi Research and Development (R&D) Group is working to address these challenges by continually updating its approach to AI ethics, looking beyond generative AI to next-generation technologies such as agentic AI and physical AI. Rather than treating AI ethics as a set of abstract principles or rules, the Group views it as a practical discipline to be applied constantly in day-to-day research and testing. A key feature of Hitachi’s approach is that AI ethics is not treated as separate from technology development. Instead, ethical perspectives are built into researcher training programs and the evaluation of research themes, and the Group continues to review and improve its operational and governance frameworks in anticipation of these evolving technologies.

AI ethics is not a one-off matter of setting rules, but a continuing evolution
―Building a framework for maintaining trust in an ever-evolving AI society

As AI becomes increasingly adopted in our society, questions regarding the ethical issues surrounding AI and the trustworthiness of its outputs will inevitably become more prominent. Against this backdrop, ensuring the reliability of AI is an essential challenge for companies engaged in the development of related technologies. Hitachi established its AI Ethical Principles in 2021 as a companywide guideline. Under this framework, the R&D Group goes beyond simply complying with the defined principles and instead focuses on how they can be continuously applied in day-to-day R&D efforts. A defining feature of this approach is an emphasis on the constant evolution and refinement of how the AI Ethical Principles are applied in practice.

Maintaining an up-to-date focus on AI ethics considerations is no simple task. It requires not only day-to-day application and feedback within research settings, but also the ability to respond continuously as expectations of AI ethics shift alongside evolving AI technologies and societal trends. This approach ensures that the reliability of Hitachi’s AI ethics framework extends beyond current AI applications in society to encompass emerging technologies such as agentic AI and physical AI, as well as potential future technologies that have yet to take shape.

“AI ethics is not something that is complete once you have established principles,” explains Toshiki Sugawara, Chief Engineer at the Technology Management Center, Technology Strategy Office. “It is a foundation for creating value through ongoing application. This is why the way we operate the framework in practice is so important.” During the research and development stage, it can be difficult to accurately predict exactly how a technology will be used in the future. It is therefore essential to incorporate ethical perspectives from the initial stages and to establish mechanisms that allow for review and reassessment as development proceeds.

Hitachi’s AI Ethical Principles consist of three standards of conduct, covering the planning phase (developing and using AI to realize a sustainable society), the social implementation phase (grounded in a human-centric perspective), and the maintenance phase (ensuring long-term value), along with seven items to be observed that apply to every stage. In the R&D Group, these standards and items are not treated simply as a checklist but are integrated throughout the research lifecycle, from initial planning to public release. Two pillars support this approach: Researcher Education, which aims to enhance AI ethics literacy through training and workplace discussions starting from the onboarding stage for new researchers, and Research Theme Evaluation, which assesses each project both at the initial planning stage (inlet) and prior to external publication of its outcomes (exit).

The R&D Group’s AI Ethics Committee plays a central role in Research Theme Evaluation. The Committee comprises representatives selected from each research center, bringing together members with a keen awareness of on-site realities across a wide range of research fields and thereby incorporating diverse perspectives. Each research theme first undergoes a self-assessment by the researchers involved, after which the Committee reviews the results and provides feedback. The process emphasizes an ongoing cycle of feedback and improvement rather than functioning as a one-way review.

“Committee members join reviews as representatives of their respective research centers,” says Sugawara. “Our goal is to ensure that AI ethics is recognized as a shared responsibility by all workplaces across the R&D Group.” Because AI usage varies across research domains and themes, perspectives on AI ethics initially differed. Through repeated cycles of review and feedback, however, a shared understanding has gradually developed across the R&D Group, and practical guidelines have taken shape.

In recent years, generative AI has also come to support the review process itself. In what are referred to as “AI ethics reviews,” generative AI assists with tasks such as performing checks and drafting comments for low-risk cases, while cases with moderate or higher risk are evaluated by workplace representatives or the AI Ethics Committee. In all cases, however, the final decision is entrusted to humans to ensure the reliability of evaluations. While efficiency and automation are important, there is no single “correct” answer in ethical judgments. This is why humans have a critical role to play in judging, verifying, and giving feedback on cases.
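As a rough illustration of this triage, the following Python sketch routes review cases by risk tier. The tier names, function, and sample case ID are hypothetical illustrations of the workflow described above, not Hitachi’s actual tooling.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

def route_review(case_id: str, tier: RiskTier) -> str:
    """Route an AI ethics review case to the appropriate reviewer.

    Low-risk cases get AI-drafted checks and comments; moderate- and
    higher-risk cases go to workplace representatives or the AI Ethics
    Committee. In every path, a human makes the final decision.
    """
    if tier is RiskTier.LOW:
        # Generative AI performs checks and drafts review comments,
        # but a human reviewer still signs off on the final decision.
        draft = f"[AI-drafted review comments for {case_id}]"
        return f"Human reviewer approves: {draft}"
    if tier is RiskTier.MODERATE:
        return f"{case_id}: evaluated by workplace representatives"
    return f"{case_id}: evaluated by the AI Ethics Committee"

print(route_review("theme-042", RiskTier.LOW))
```

The point of the routing is efficiency, not delegation: even on the low-risk path, the human sign-off remains the final step.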

Before research results are released externally via news releases or external demonstrations, an additional exit review is incorporated into the operational process to identify and resolve any remaining doubts or concerns. Feedback from subsequent demonstrations and external stakeholders is then fed back into future research and used to improve Hitachi’s operating framework. In this way, Hitachi’s AI Ethical Principles are not a one-off matter of creating rules, but a constantly evolving framework that underpins the reliability of the company’s AI, products, and services.

Click here to watch the video (in Japanese)

Video: “Research Frontlines Driving the Social Implementation of AI Ethics” (www.youtube.com)

Safeguarding society from constantly evolving AI technology
―Designing “guardrails” for the physical AI era

As AI becomes increasingly integrated in society, the challenge of balancing technological advancements with ethics and reliability concerns is once again coming to the forefront. “When it comes to AI ethics, simply establishing principles alone is not enough. What is needed is practical AI governance that anticipates technological evolution,” explains Daisuke Matsubara, Department Manager of Language & Audio Intelligence Research Department, Advanced Artificial Intelligence Innovation Center, Digital Innovation R&D.

Hitachi’s AI research extends beyond the digital realm. Building on decades of experience in physical domains such as societal infrastructure and industrial systems, Hitachi has integrated machine control technologies and design and operational know-how with AI to develop “physical AI” technologies that operate in the real world. AI is used across a wide range of phases, from design to manufacturing, operation, and maintenance, with applications including construction process simulations for power plants; real-time monitoring of railways, traffic signals, and infrastructure; centralization of production data; and improved customer support. In this way, Hitachi’s approach to AI in research and development is grounded in real-world applications.

Meanwhile, AI technologies are evolving rapidly. In 2023, the scope of information retrieval and summarization using generative AI-powered chatbots expanded through the introduction of retrieval-augmented generation (RAG), while in 2024, AI agents capable of executing user-defined tasks emerged. Looking further ahead, agentic AI that autonomously plans processes and selects appropriate tools is predicted to become increasingly commonplace. At the same time, the integration of generative AI with robotics technologies is broadening the scope of application of physical AI.

As AI becomes more autonomous and capable of learning, the nature of the associated risks also changes. “Going forward, it will become increasingly important to design and operate AI based on clear principles,” notes Matsubara. Hitachi’s efforts extend beyond the formulation of its AI Ethical Principles to the implementation of sound AI governance practices in anticipation of further technological evolution. These efforts are aligned with international discussions such as the G7 Hiroshima AI Process and the global AI Safety Summits, reflecting Hitachi’s commitment to introducing advanced AI technology to society in a responsible manner.

In the context of physical AI, for example, robots autonomously perceive their surroundings, determine safe operating boundaries, and control their movements accordingly. Supporting this functionality are “AI guardrails,” which incorporate protective control, safety analysis, and AI reliability enhancement technologies. These guardrails include mechanisms that suppress hallucinations by preventing the AI from providing answers when sufficient justification cannot be presented. In addition, monitoring rules are applied to inputs and outputs outside the AI model to ensure the overall safety and reliability of AI-enabled systems. Hitachi is also conducting joint research with Germany’s Fraunhofer-Gesellschaft and utilizing safety analysis methodologies such as the System-Theoretic Accident Model and Processes (STAMP) framework.
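As a simplified illustration of such a guardrail, the sketch below refuses to answer when justification is insufficient and applies basic monitoring rules outside the model. The function names, confidence threshold, and specific checks are assumptions made for illustration, not Hitachi’s implementation.

```python
def generate_answer(question: str, evidence: list[str]) -> str:
    """Stub standing in for an actual model call in a real system."""
    return f"Answer to {question!r}, grounded in {len(evidence)} source(s)."

def guarded_answer(question: str, evidence: list[str],
                   confidence: float, threshold: float = 0.8) -> str:
    """Suppress answers that lack sufficient justification."""
    # Input-side monitoring rule, applied outside the AI model itself.
    if not question.strip():
        return "Rejected: empty or malformed input."
    # Hallucination guardrail: require supporting evidence and a minimum
    # confidence score before allowing the model to answer at all.
    if not evidence or confidence < threshold:
        return "No answer: sufficient justification cannot be presented."
    answer = generate_answer(question, evidence)
    # Output-side monitoring rule: a simple length sanity check.
    return answer if len(answer) < 2000 else "Rejected by output monitoring."

print(guarded_answer("Is this lift operation safe?", ["sensor log #17"], 0.92))
print(guarded_answer("Is this lift operation safe?", [], 0.95))
```

The design choice worth noting is that the input and output checks sit outside the model: even if the model itself misbehaves, the surrounding rules still bound what the overall system can do.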

Creating highly reliable AI requires not only enhancing the capability of the AI itself, but also establishing complementary external systems that support reliability. “Enhancing large language models (LLMs) alone is not enough to ensure sound AI ethics and reliability,” says Matsubara. “It is also important to focus on the maintenance of external data, including prompts and documents used in RAG, as well as overall operational management.”
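One way to picture that external maintenance is a registry that tracks when prompts and RAG documents were last reviewed, flagging stale assets for re-verification. The asset names, review interval, and registry structure below are hypothetical, sketched only to make the idea concrete.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical registry of the external assets a RAG pipeline depends on,
# each tracked with a version and the date it was last reviewed.
EXTERNAL_ASSETS = {
    "system_prompt": {
        "version": "1.4",
        "last_reviewed": datetime(2025, 1, 10, tzinfo=timezone.utc),
    },
    "maintenance_manual.pdf": {
        "version": "3.0",
        "last_reviewed": datetime(2024, 6, 2, tzinfo=timezone.utc),
    },
}

def stale_assets(max_age_days: int = 180) -> list[str]:
    """List prompts and RAG documents overdue for review, so operational
    management can schedule them for re-verification."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [name for name, meta in EXTERNAL_ASSETS.items()
            if meta["last_reviewed"] < cutoff]

print(stale_assets())  # assets not reviewed within the last 180 days
```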

Click here to watch the video (in Japanese)

Video: “AI Ethics in the Era of Generative AI” (www.youtube.com)

For more information on how Hitachi is approaching AI ethics in its individual research themes, see: How Have AI Ethics Transformed Hitachi’s Approach to Research and Development? —Meet the researchers working to enhance reliability through AI
