This study evaluates the translation quality produced by AI-powered translation systems (AIPTSs), specifically ChatGPT, in translating the Farewell Sermon, a significant Islamic text. Using House's (2015) model of translation quality assessment, the research analyzes the translation at several levels: text, register (field, tenor, mode), and genre. The analysis reveals that AIPTSs often produce lexical and syntactic inaccuracies that hinder the capture of the sermon's intended religious voice. While the broad themes of the Farewell Sermon are generally conveyed, subtleties of religious terminology are frequently missed, and sentence structures tend to be translated literally, without deeper contextual understanding.
Furthermore, the study identifies 81 overt translation errors committed by AIPTSs in translating the Farewell Sermon into English. The most frequent error types are "creative translation" (24 errors), "not translated" segments (15 errors), "distortion of meaning" (9 errors), "slight change in meaning" (8 errors), and "breach of the source language system" (7 errors). Less frequent are "significant change in meaning" (6 errors) and "cultural filtering" (4 errors). Additionally, the research introduces the concept of "software intervention," referring to technical errors that fall outside the scope of House's model and that can affect the translation either positively or negatively, depending on whether the error improves readability without distorting meaning. The findings suggest that while House's model is broadly suitable for assessing AIPTSs, it requires further refinement to address technical, system-induced errors.