Hitachi Research and Development (R&D) Group is working to apply AI across a wide range of domains. Underpinning these efforts is a shared stance that ultimate decision-making responsibility lies with humans. Rather than letting AI ethics become a constraint on technological innovation, the Group is accumulating practical insights into how AI and humans can interact in ways that society accepts. Through concrete initiatives in the company's R&D operations, led by the R&D Group's AI Ethics Committee and grounded in Hitachi's AI Ethical Principles, the Group is working to build a relationship of trust between humans and AI.

For an overview of the R&D Group’s AI ethics initiatives, see: To What Extent Can AI be Trusted?—How Hitachi Research and Development Group is tackling the issue of AI ethics and social implementation

To what extent can AI decisions be trusted?
—Explainable AI and a “humans take responsibility” design philosophy

Demand for AI is expanding into mission-critical operations, including credit screening in finance, support for medical diagnoses, and control of societal infrastructure. At the same time, the “black box” nature of AI poses a significant challenge. An AI’s ability to explain why it reached a particular decision is directly linked to its trustworthiness. Masayoshi Mase, Chief Researcher at the Language & Audio Intelligence Research Department, Advanced Artificial Intelligence Innovation Center, Digital Innovation R&D, has been working on explainable AI (XAI) for nearly a decade.

XAI explains an AI's decisions by identifying the factors and examples that influenced its output. Conventional approaches typically evaluate the importance of input variables by substituting or permuting their values and observing the corresponding changes in predictions. However, such substitution can produce feature combinations that would never occur in reality or are not logically possible, making it difficult to generate truly reliable explanations. To address this issue, Mase and his team developed a method known as Cohort Shapley, which evaluates key decision-making factors using only observed data, enabling more realistic and reliable analysis. Because model outputs are explained based on actual data, the method also has potential applications in areas such as fairness evaluation.
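The idea behind Cohort Shapley can be sketched in a few lines: instead of synthesizing unrealistic feature combinations, importance is computed by averaging outcomes over "cohorts" of observed records that match the target on subsets of features, and the standard Shapley weighting is applied to those cohort means. The similarity rule and toy data below are illustrative assumptions, not Hitachi's implementation.

```python
# Minimal sketch of the Cohort Shapley idea: feature importances for one
# target observation are computed from observed data only, by averaging
# outcomes over cohorts of similar records.
from itertools import combinations
from math import factorial

import numpy as np

def cohort_shapley(X, y, target_idx, is_similar):
    """Shapley value of each feature for the target observation.

    X: (n, d) observed inputs; y: (n,) observed or predicted outputs.
    is_similar(j, a, b): True if values a and b agree on feature j.
    """
    n, d = X.shape
    t = X[target_idx]

    def cohort_mean(S):
        # Mean outcome over records matching the target on features in S.
        mask = np.ones(n, dtype=bool)
        for j in S:
            mask &= np.array([is_similar(j, X[i, j], t[j]) for i in range(n)])
        return y[mask].mean()  # never empty: the target matches itself

    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in combinations(others, r):
                w = factorial(r) * factorial(d - r - 1) / factorial(d)
                phi[j] += w * (cohort_mean(S + (j,)) - cohort_mean(S))
    return phi

# Toy example: two binary features, output determined entirely by feature 0.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0.0, 0.0, 1.0, 1.0])
same = lambda j, a, b: a == b
print(cohort_shapley(X, y, target_idx=3, is_similar=same))  # phi ≈ [0.5, 0.0]
```

Because every cohort mean is taken over real records, no impossible data point is ever evaluated, which is the property the article highlights.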

While XAI based on observed data provides one approach to addressing the black box problem, Mase points out that there is no absolute "correct" explanation in XAI, which makes it essential to clearly define the objective of the explanation. In reviews conducted by the AI Ethics Committee, feedback highlighted that ensuring AI trustworthiness requires clear goal-setting through expert involvement and human responsibility for final decisions. "The challenge of implementing AI ethics in practice lies in continuously operating and improving the governance framework," says Mase. "As AI technology evolves from predictive AI to generative AI, agentic AI, and even physical AI, we will continue reviewing and refining our approach to explainability," he adds, expressing his enthusiasm for building reliable AI.

Click here to watch the video (in Japanese): "Explainability of AI and the approach to practicing AI ethics" (youtu.be)

To what extent should AI be allowed to support the assessment of humans?
— Mutual understanding and AI ethics in AI personality estimation during interviews

As labor markets become more fluid and placing "the right person in the right position" becomes increasingly urgent, there are limits to how well interviewers can reduce the variability in talent assessments caused by their experience and subjectivity, and fully and fairly capture each individual's unique strengths. In response, Hitachi is conducting joint research with the University of Tokyo to develop an AI that estimates personality traits from nonverbal information such as facial expressions, body language, and the volume and tone of voice. Hitachi defines this technology not as "a substitute for final judgments" but as "a support tool to deepen mutual understanding." In promoting its use, Hitachi aims to create an environment in which both parties can share their characteristics with trust and satisfaction by thoroughly addressing ethical considerations in both technical and operational aspects: verifying fairness using diverse data*1, designing explainable algorithms that avoid black-box estimation*2, and implementing safeguards to protect privacy, including the right to opt out and measures to prevent unintended collection of personal information*3.

Takashi Numata, Chief Researcher at the Multimodal Sensing Research Department, Measurement Integration Innovation Center, Sustainability Innovation R&D, explains: "Focusing on nonverbal information such as body language rather than conversation content makes it easier to support mutual understanding even when the questions and dialogue are fluid." The technology is designed with explainability in mind, using a rule-based estimation algorithm rather than a black-box approach, and is expected to serve as a tool that supports mutual understanding. However, because it is a technology involving AI-supported human assessment, careful consideration is required from an AI ethics perspective.
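As a rough illustration of what a rule-based, explainable estimation process can look like (in contrast to a black-box model), the sketch below maps nonverbal cues to trait scores, with every fired rule carrying a human-readable rationale. The cues, thresholds, and trait mappings are invented for illustration and are not Hitachi's actual rules.

```python
# Hypothetical rule-based estimator: each rule links a nonverbal cue to a
# trait contribution and an explanation, so every output can be traced
# back to the rules that produced it.
from dataclasses import dataclass

@dataclass
class Rule:
    cue: str          # nonverbal feature name
    threshold: float  # fire when the feature exceeds this value
    trait: str        # trait the rule contributes to
    delta: float      # contribution to the trait score
    rationale: str    # human-readable explanation

RULES = [
    Rule("smile_rate", 0.3, "extraversion", 0.2, "smiled frequently"),
    Rule("gesture_rate", 0.5, "extraversion", 0.1, "used expressive gestures"),
    Rule("speech_volume", 0.7, "extraversion", 0.1, "spoke with a strong voice"),
    Rule("pause_ratio", 0.4, "conscientiousness", 0.2, "paused to consider answers"),
]

def estimate(features):
    """Return trait scores plus the rationale for every rule that fired."""
    scores, reasons = {}, []
    for r in RULES:
        if features.get(r.cue, 0.0) > r.threshold:
            scores[r.trait] = scores.get(r.trait, 0.0) + r.delta
            reasons.append(f"{r.trait} +{r.delta:.1f}: {r.rationale}")
    return scores, reasons

scores, reasons = estimate({"smile_rate": 0.5, "pause_ratio": 0.6})
print(scores)   # {'extraversion': 0.2, 'conscientiousness': 0.2}
print(reasons)  # one rationale string per fired rule
```

Because each estimation is just the sum of rules that fired, the `reasons` list directly provides the basis for the estimation, which is the explainability property footnote *2 describes.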

During exit-stage reviews prior to public disclosure of the technology, discussions with the AI Ethics Committee brought up concerns such as protection of personal privacy and fairness. “We reaffirmed the need for Hitachi to communicate a clear message that before offering this as a product or service, we are committed to establishing mechanisms to prevent privacy leaks and ensuring fairness, explainability, and transparency, and we are proceeding with our R&D efforts based on this understanding,” says Numata.

To use personality estimation AI as a tool that supports mutual understanding, it is essential, particularly from an AI ethics perspective, to clearly communicate how and for what purposes it will be used. Research and development soundly rooted in AI ethics is vital to ensuring that both interviewers and interviewees can feel comfortable using the technology.

Click here to watch the video (in Japanese): "AI technology that estimates personality traits using nonverbal information such as facial expressions, gestures, and tone of voice during interviews" (youtu.be)

*1 From the perspective of fairness, it is important to verify—using datasets that include diverse attributes—whether estimation bias arises due to differences in attributes such as nationality, race, or gender, and to implement guardrails to avoid unfair treatment of specific attributes (e.g., excluding data that includes unverified attributes from the scope of estimation through appropriate design and operational measures).
*2 From the perspective of explainability, this technology uses a rule-based estimation process designed so that the rationale for each estimation can be explained.
*3 From the perspective of privacy and safety, it is necessary to consider not only data leakage but also the handling of information that an individual did not intend to have collected. Data collection should be limited to the minimum necessary for the intended purpose of use, and estimation results should be treated as supporting information for humans making final decisions. It is also important to ensure that individuals who do not consent to its use suffer no disadvantage, and that individuals have the rights to review estimation results, request corrections or deletion, and raise objections.

Ethical design when applying AI to societal infrastructure
— The challenge of community-driven virtual power plants (VPPs)

How should AI ethics be incorporated into systems that reflect the will of communities? One potential direction can be seen in research on virtual power plants (VPPs), which aggregate multiple power generation facilities and operate them as a single power plant. Among Hitachi's VPP initiatives, which aim to achieve both a stable power supply and environmental performance, is the development of VPP systems that reflect the values of the local community.

Shinji Ishihara, Chief Researcher at the Autonomous Control Research Department, Mobility & Automation Innovation Center, Digital Innovation R&D, and his team are working to develop an AI for VPP control that incorporates such community preferences. In community-driven VPP models, priorities such as environmental performance and cost differ by region. The challenge lies in reflecting the preferences of local communities, such as cities or towns, in the AI while maintaining stable operation without power outages. By designing the system architecture based on Hitachi’s AI Ethical Principles, the team has enabled a control mechanism that aligns as closely as possible with community policies while ensuring that power outages are prevented.

Critical controls directly related to grid safety, such as frequency stabilization, are handled using conventional physical models to guarantee stable operation of the VPP, ensuring that the system delivers value by preventing power outages. AI, meanwhile, is used for decision-making processes such as which power plants to prioritize. Through "preference learning," in which residents and municipalities compare operational outcomes and select their preferred options, the AI gradually aligns its decisions with community values. In this way, the AI ensures stable operation of the power grid as key societal infrastructure while also incorporating the preferences of its operators, enabling decision-making that is aligned not only with AI ethics but also with higher-level societal values.
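The preference-learning loop described above can be sketched as a simple pairwise-comparison model: residents choose between two candidate operating plans, and the AI fits weights over objectives so that the chosen plan scores higher. The Bradley-Terry-style update, the objectives, and the data below are illustrative assumptions, not the actual Hitachi system.

```python
# Hypothetical preference learning from pairwise choices: fit a weight
# vector over plan objectives via logistic (Bradley-Terry style) updates
# so that chosen plans receive higher utility.
import numpy as np

def fit_preferences(pairs, choices, lr=0.5, epochs=200):
    """pairs: list of (plan_a, plan_b) objective vectors (higher = better).
    choices: 1 if plan_a was preferred, 0 if plan_b was preferred."""
    d = len(pairs[0][0])
    w = np.zeros(d)
    for _ in range(epochs):
        for (a, b), c in zip(pairs, choices):
            diff = np.asarray(a) - np.asarray(b)
            p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(plan_a preferred)
            w += lr * (c - p) * diff              # logistic gradient step
    return w

# Toy data: objectives = [renewable share, negative cost].
pairs = [([0.8, -1.2], [0.5, -1.0]),   # greener but pricier vs. cheaper
         ([0.6, -1.1], [0.4, -0.9]),
         ([0.3, -0.8], [0.7, -1.3])]
choices = [1, 1, 0]   # this community consistently picks the greener plan
w = fit_preferences(pairs, choices)
print(w)  # the weight on renewable share comes out positive
```

Crucially, the learned weights only steer the prioritization of plants; as the article notes, safety-critical controls such as frequency stabilization remain under conventional physical models.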

Click here to watch the video (in Japanese): "AI technology for virtual power plant operation that reflects local community values" (youtu.be)

Autonomous driving cannot be achieved by AI alone
—Achieving safety and reliability through human-AI collaboration

With a growing shortage of drivers amidst Japan’s aging population, maintaining regional public transport systems is becoming increasingly difficult. While new mobility technologies such as autonomous driving are expected to provide solutions, their implementation involves multifaceted ethical challenges, including safety and privacy. At Hitachi, research is underway to combine autonomous driving technologies with AI and digital solutions to evaluate regional transportation systems and ensure their safe and efficient operation and management. Again, these research and development efforts are based on Hitachi’s AI Ethical Principles.

Tsuyoshi Kitamura, Researcher at the Autonomous Control Research Department, Mobility & Automation Innovation Center, Digital Innovation R&D, and his team have been conducting field tests of autonomous vehicles to study driving models that optimize human-AI collaboration. Many AI technologies are essential for autonomous driving, including route planning using high-precision maps, environmental perception using cameras and lasers, and detection of obstacles to avoid collisions. "We are developing AI functionality to accurately recognize people and vehicles on the road," Kitamura explains. "As we proceed with our research, issues such as the management of personal data and the risks associated with AI are discussed by the AI Ethics Committee, and we consult with the relevant internal parties at each stage."

In line with the AI Ethical Principles, field tests include both a driver who can switch to manual control and a safety officer responsible for notifying the driver in case of trouble; this two-person structure ensures safety during testing. Efforts have also been made to design the human-AI user interface (UI) appropriately. When abnormalities are detected, how the information is communicated is critical to ensuring that the safety officer can understand the situation and maintain safety. In an emergency, simply displaying the AI's raw output is not enough for humans to take action. By viewing autonomous driving as a system that includes humans, the team is focusing on improving reliability through a UI design that seamlessly connects humans and AI.
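As a rough sketch of this UI principle, the snippet below translates raw anomaly events into a description of the situation and a recommended action for the safety officer, rather than displaying the AI's output verbatim. The event types and messages are assumptions for illustration, not the actual test-vehicle interface.

```python
# Hypothetical anomaly "playbook": each raw event type maps to a
# human-readable situation plus the action the safety officer should take.
ANOMALY_PLAYBOOK = {
    "obstacle_low_confidence": (
        "Possible obstacle ahead; perception confidence is low.",
        "Watch the road ahead and prepare to request manual control."),
    "sensor_dropout": (
        "Camera/LiDAR input was lost.",
        "Notify the driver to switch to manual control now."),
    "localization_drift": (
        "Vehicle position estimate disagrees with the map.",
        "Reduce speed and confirm the vehicle's lane position."),
}

def officer_message(event_type):
    """Turn a raw anomaly event into situation + recommended action."""
    situation, action = ANOMALY_PLAYBOOK.get(
        event_type, ("Unrecognized anomaly.", "Request manual control."))
    return f"[ALERT] {situation} -> {action}"

print(officer_message("sensor_dropout"))
```

The design choice mirrors the article's point: treating the vehicle, driver, safety officer, and AI as one system means the UI must deliver actionable guidance, not raw model output.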

Click here to watch the video (in Japanese): "Digital platform technology that streamlines the design, evaluation, and implementation of entire regional transportation systems" (youtu.be)

How to integrate human experience and AI
—Ethical design to support a culture of safety

Safety practices in industrial settings have traditionally relied on the experience of skilled workers. However, with labor shortages and aging workforces making knowledge transfer increasingly difficult, research is underway to apply AI to areas that require advanced expertise. Ryosuke Odate, Researcher at the Vision Intelligence Research Department, Advanced Artificial Intelligence Innovation Center, Digital Innovation R&D, is developing technology that uses the metaverse and AI to support RKY (an acronym for risuku kiken yochi [risk and hazard prediction]) activities. As this technology involves handling safety-related information, it is closely linked to Hitachi's AI Ethical Principles.

The ways in which AI was applied were carefully reviewed within the research department from the early stages, and for actual applications, the technology was verified using a checklist provided by the AI Ethics Committee. Finally, the technology underwent a review by the Committee before being announced to the public in a news release. "During the initial stages of research, the scope to which we would expand the technology was unclear. The AI Ethics Committee provided detailed feedback, such as potential conflicts with EU regulations, which allowed us to further study how the technology would interact with legal systems," says Odate. The system the team developed uses photographs of the workplace and a database of past cases to enable the AI to identify risks, providing a foundation for adapting the technology to constantly evolving domestic and international regulations.
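The retrieval step described above can be sketched as a nearest-neighbor search: a workplace photograph is converted into a feature vector and matched against a database of past incident cases, whose associated risks are surfaced for the RKY discussion. The stand-in embeddings and case data below are illustrative assumptions, not Hitachi's implementation.

```python
# Hypothetical case retrieval: rank past incident cases by similarity to
# the photo's feature vector and surface the risks of the closest matches.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_risks(photo_vec, case_db, top_k=2):
    """Return the risks of the top_k cases most similar to the photo."""
    ranked = sorted(case_db, key=lambda c: cosine(photo_vec, c["vec"]),
                    reverse=True)
    return [c["risk"] for c in ranked[:top_k]]

# Toy case database: short vectors stand in for image embeddings.
case_db = [
    {"vec": np.array([0.9, 0.1, 0.0]), "risk": "fall from ladder"},
    {"vec": np.array([0.1, 0.9, 0.1]), "risk": "contact with rotating part"},
    {"vec": np.array([0.8, 0.2, 0.1]), "risk": "dropped tool from height"},
]
photo = np.array([1.0, 0.1, 0.05])  # imagine: embedding of a ladder scene
print(suggest_risks(photo, case_db))  # ['fall from ladder', 'dropped tool from height']
```

Because every suggestion traces back to a concrete past case, workers can inspect why a risk was raised, which fits the explainability stance running through the article.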

During the Proof of Concept (PoC) phase, studies demonstrated that the system was able to extract significantly more information than conventional whiteboard-based RKY methods. It is also envisioned that the risk and hazard prediction metaverse can be used in environments that are difficult for humans to access, such as nuclear power facilities during decommissioning. Through careful discussions on AI ethics, the technology was designed with an awareness of the trade-off between safety and efficiency. In this way, AI ethics are playing a key role in guiding the research direction during Hitachi’s efforts to develop a new safety-oriented culture that integrates human experience with data.

Click here to watch the video (in Japanese): "Newly developed risk and hazard prediction support system that uses Hitachi's next-generation AI agent 'Naivy' to enhance on-site safety, with demonstrated improvements in workplace safety and efficiency" (youtu.be)


This article is a sponsored article.