As a researcher in Artificial Intelligence, Cognitive Science and Robotic Engineering, I find that the way HAL's speech alters and regresses makes a lot of sense. The machine I use has 1000 cores, and my approach distributes perceptual and motor processing, as well as higher-level processing, across them. To explain in more detail...
Building an AI requires lots of processors, distributed across multiple boxes holding multiple modules - even today these can look pretty similar to the cabinets and the boards we see being removed. Functionality is spread across these many processors, including higher function (linguistic, ontological, learning and reasoning capabilities), long- and short-term memory (life memory and working memory), and lower function (perceptual/sensory-motor abilities). A sketch of this kind of layout follows.
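To make the idea concrete, here is a minimal sketch of that layout - the module names and core counts are hypothetical illustrations of my own, not HAL's actual design. The point is that every module shares one pool of processors, so losing processors degrades everything proportionally:

```python
# Hypothetical sketch: functionality distributed across a shared processor pool.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    kind: str          # "higher", "memory", or "lower"
    cores_needed: int  # processors this module normally occupies

MODULES = [
    Module("language",       "higher", 200),
    Module("reasoning",      "higher", 150),
    Module("life_memory",    "memory", 250),
    Module("working_memory", "memory", 100),
    Module("perception",     "lower",  200),
    Module("motor",          "lower",  100),
]

def assign(modules, total_cores):
    """Share the available cores out in proportion to each module's demand."""
    demand = sum(m.cores_needed for m in modules)
    return {m.name: total_cores * m.cores_needed // demand for m in modules}

print(assign(MODULES, 1000))  # the full machine
print(assign(MODULES, 400))   # after boards are pulled: every module runs starved
```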
I regularly see movies/animation slow down on my computer when there is not enough CPU; in the same way, thinking and reasoning slow down when not enough processors are available. The mammalian brain is able to shift functionality, with difficulty, into nearby areas, and ablation studies have shown incredible plasticity and resilience - an ability to transfer functionality into a relatively small remaining volume (I recall studies that went as low as 10%). Some studies deliberately seek to understand the human brain by turning off parts of it, and we have natural experiments resulting from strokes and accidents - the slowing of speech and the loss of some grammatical structure are well known in human aphasia (but without the lowering of pitch).
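A toy model of that kind of "plasticity" in a machine - my own illustration, not drawn from any particular study - is that when processors are removed, the surviving ones absorb the displaced work, so everything still runs, just proportionally slower:

```python
# Toy model: surviving processors absorb the work of the removed ones.
def slowdown_after_loss(total_cores, cores_lost):
    remaining = total_cores - cores_lost
    if remaining <= 0:
        raise ValueError("no processors left to absorb the work")
    return total_cores / remaining  # each task now takes this many times longer

for lost in (0, 500, 900):
    factor = slowdown_after_loss(1000, lost)
    print(f"lose {lost}/1000 cores -> tasks take {factor:.1f}x longer")
```

Squeezing everything into the last 10% of the hardware means everything runs roughly ten times slower - still functioning, but visibly (and audibly) degraded.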
My approach to building an AI is, like HAL's, based on learning (as explained mainly in the 2010 sequel), and psycholinguistics shows that the earliest memories are in some senses the deepest. My slogan in the 1980s and 1990s was "HAL by 2001" - but I didn't get the funding required to achieve that!-) Basically, back then we already knew what to do, but ever since a 1970 review of US funding for AI research and language technology, funding has largely been unavailable in this area due to slow progress and the failure to deliver on promises. IBM's developments with Deep Blue and Watson show that the capabilities required for playing chess or answering questions are there. It's the kinds of things a two-year-old can do that computers haven't learned yet - because they need to be trained like a baby, learning to interact with their social and physical environment.
The slowing of speech makes perfect sense. The lowering of pitch makes sense too, if speech synthesis is performed by generating the waveform across distributed processors: with the unexpected loss of processing power, synthesis of the waveform slows down because the required CPU time is being consumed by other functions, and is likely being carried out over jury-rigged pathways, using components not designed or optimized for the purpose. A waveform generated more slowly than intended comes out both stretched in time and lowered in pitch.
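Here is a back-of-the-envelope check of that last point - my own sketch, with made-up sample rates. A tone synthesized for one playback rate, but emitted by a starved system at half that rate, lasts twice as long and sounds an octave lower:

```python
# Sketch: emitting waveform samples at half speed halves the perceived pitch.
import numpy as np

design_rate = 44_100   # samples/s the synthesizer was built to deliver
starved_rate = 22_050  # what a CPU-starved system actually manages to emit

t = np.arange(design_rate) / design_rate   # 1 second of samples, as designed
tone = np.sin(2 * np.pi * 220.0 * t)       # a 220 Hz "voice"

# Played out at the starved rate, the same samples span twice the time...
duration = len(tone) / starved_rate
# ...and the perceived frequency drops by the same factor.
perceived_hz = 220.0 * starved_rate / design_rate

print(f"duration: {duration:.1f} s, perceived pitch: {perceived_hz:.0f} Hz")
```

That is exactly the effect we hear as HAL winds down: "Daisy, Daisy" sung ever slower and ever deeper.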