If your dad is Catholic, I would highly recommend A Canticle for Leibowitz.
In my opinion, it stands up very well as Sci-Fi in general, but it is told from the perspective of a Catholic abbey in Utah across three stages of the post-apocalypse. It has strong “pro-religious” themes about the value of vocation and duty in times of hardship, as well as “pro-science” themes about preserving knowledge, even when that knowledge is dangerous.
The biggest con is that it is heavily steeped in Catholic imagery, so if you don’t like that, or your dad wouldn’t, you shouldn’t bother. But if you want to hook a strong Catholic on Sci-Fi, and they are open to Sci-Fi at all, I would heartily recommend it. Another con is that it isn’t particularly short, if length may be an issue: about 320 pages total.
I agree with everything you said; I only want to add that there are really one or two ways for the AGI problem, a la Sci-Fi, to happen.
By far the most straightforward way is if the military believes it can be used as a fail-safe in MAD scenarios, i.e. if they give the AI the power to launch nuclear ICBMs a la War Games. Not very likely, but still not something we want to dismiss entirely. This is also a problem with regular AI and LLMs.
The second, and in my opinion more likely, scenario is if the AI is able to get a large number of people to trust it implicitly, and then uses seemingly unrelated, benign actions from each of them to do something catastrophic.
Something you may notice about these two scenarios is that neither one of them can be “safeguarded” against in the code itself; they can only be mitigated by educating people on the proper usage of AI and the right posture to take when handling it.