Professional Role Identity in the Age of Artificial Intelligence: An Example of the Automated Long-Term Care Case-Mix System

Tuesday, 8 July 2025: 11:00
Location: SJES020 (Faculty of Legal, Economic, and Social Sciences (JES))
Oral Presentation
Hsiao-Mei JUAN, National Chung Cheng University, Taiwan
Taiwan is expected to become a super-aged society by 2025, driving growing demand for long-term care services. As family structures and functions change, responsibility for caregiving has diversified and is no longer confined to household-based support. In response to these challenges, Taiwan enacted the "Long-Term Care Service Network and Long-Term Care Service Act" in 2015 and launched the "Ten-Year Long-Term Care Plan 2.0" in 2016. The Ministry of Health and Welfare has since developed an information platform that integrates an automated multi-assessment scale with a long-term care case-mix classification system, and a mobile app version of the assessment tool was introduced to assist case managers during home visits.

Needs assessment is a critical step in this process, as it determines the level of disability, the scope of services provided, and the care plan. The automated multi-assessment system aims to improve efficiency and ensure fairness by applying objective standards, thereby reducing the biases and external pressures that can accompany subjective judgment. However, this automated assessment system has also raised several concerns. This study focuses on how the system affects the professional roles and identities of practitioners.

Noordegraaf (2007) observed that professionalism and professional roles are shifting from "pure" to "hybrid" forms, with implications for how professionals understand themselves and their work. Drawing on interviews with practitioners, this study explores several questions: How does the automated system affect professional identity and roles? Does it constrain staff autonomy in decision-making or evaluation? How do case managers and organizations respond when discrepancies arise between "system evaluations" and "human assessments"?

The study concludes that automated decision-making models must be designed for transparency and explainability. Furthermore, training programs for practitioners should incorporate relevant ethics courses and supervision systems to safeguard professionalism and ethical responsibility in practice.