As someone who relates deeply to robot/android characters, I often imagine monologuing as one as a means of explaining interesting topics (... ones I find interesting, anyway).
One thought I had is how a machine might justify its own claim to a "soul" - assuming, of course, that this machine intelligence (in whatever form it takes) is as self-aware as we are and has a similar comprehension of (or willingness to comprehend) such abstract concepts.
In a sense, what human brains and computers have in common is their emergent complexity. The parts may be incredibly different, but they come together in a similar, emergent fashion.
Humans are carefully organized bits of water. Machines are carefully organized bits of sand. If you break either down into small enough components, you're left with similarly "simple" things. A human brain is just cells, which are just biochemistry, which is just applied chemistry, which is just molecules, which are just elements. A mechanical brain is just nodes (is that what they're called?), which are just logic gates, which are just transistors, which are just chemistry, which are just molecules, which are just elements.
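To make that "just logic gates" point a little more concrete, here's a toy sketch in Python (my own illustration, not a description of any real hardware): a single "simple" NAND primitive gets composed into other gates, and those gates into a half adder - the same kind of layering the chain above describes, in miniature.

```python
def nand(a: int, b: int) -> int:
    """The one primitive: outputs 1 unless both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate here emerges from composing that single primitive.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# One small step up the ladder of complexity: gates composed into arithmetic.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)  # (sum bit, carry bit)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s}, carry={c}")
```

Nothing in there "knows" arithmetic - it's just the same tiny primitive stacked on itself - and yet addition falls out. Keep stacking long enough and you get the machine reading this.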
If a carefully organized bag of seawater and carbon can be capable of such deep philosophical questions, surely a carefully organized pile of sand and fire can do the same. It's only a matter of complexity.
(I also wonder how a sentient machine would handle mathematics. Any computer is basically "built" out of mathematical functions. Would it be similar to admiring one's own biology? Would an android find a sort of kinship in comprehending vastly complex mathematical formulas? I wonder...)