Suppose your program fails.
As evidence for the reasonableness of this assumption, I point to: Casey Liss (@caseyliss) described an instance of this occurring on episode 54, "goto fail;", roughly 29 minutes into the episode.
Suppose there is a hidden fault, say a memory leak, produced by the program. If you were able to watch the activity in memory alongside the commands whose execution causes that activity, and if the two clocks governing each were appropriately synchronized, then you might be able to identify the particular cause of all the memory leaking by measuring how much of the allocated memory was actually due to a particular sequence of code. (Sequences would be grouped into functional equivalence classes: for a given output, there is a whole set of other inputs that would produce that same output as well, i.e., a many-to-many mapping.)
So suppose every allocation were automatically analyzed with respect to the code that generated it: could you not then use that as an error signal? That is, you have identified what the problem is (memory leaking), you have allocated appropriate responsibility portions to each piece of code, and you can then analyze those portions to see whether they trace back to a common source point.
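A minimal sketch of this kind of allocation attribution is possible today with Python's standard `tracemalloc` module. The leaky function and the allocation sizes below are illustrative assumptions, not anything from Casey's episode; the point is only that every allocation can be charged back to the source line that made it:

```python
import tracemalloc

_cache = []

def leaky():
    # Illustrative fault: each call appends to a module-level list,
    # so memory attributed to this line grows without bound.
    _cache.append(bytearray(100_000))

tracemalloc.start()

for _ in range(50):
    leaky()

# Attribute every still-live allocation back to the source line that made it.
snapshot = tracemalloc.take_snapshot()
stats = snapshot.statistics("lineno")

# The top entries are the "responsibility portions": which lines of code
# are charged with the most still-allocated memory.
for stat in stats[:3]:
    print(stat)
```

The top entry is the `bytearray` line inside `leaky`, carrying roughly 5 MB of blame; the analysis step the paragraph imagines would then ask whether many such entries share a common source point.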
I.e., if you find the root of a many-tiered, many-faced superorganism of causal relations (like oaks, or aspens, or strawberries; see the Peter Godfrey-Smith post for details, or I think I also linked to one of the chapters that includes this information), and you excise the root, how much will wither?
It is in this way that bugs, diseases, and viruses infect our systems and have far-reaching effects on organisms much larger than themselves. But if we can identify these things automatically in the real world, where we don't have access to literally all of the details, how can we not identify them automatically in our virtual worlds (simulated logic spaces), where we do have access to literally all of the details? This is a Turing-no-worries task, because in order for something to be sent it needs to have already been encoded.
This is also the case for the history of EVE Online book, which you should check out (today is the last day of its Kickstarter). It will be a text I will probably be pulling a lot from after its release (and possibly before, if they open-source their data).
The goal there, where literally every interaction that could possibly have mattered could by definition have been recorded perfectly, will also be to undertake automatic abduction in a complex space. I hope to build a model that could in theory be applied to this dataset and output the relevant root causes in an actually mathematically defined universe.
But a history of any series of computations is possible, and that history may be able to identify patterns that suggest fixes. Such abductive strength could even help create automatically improving code. The highest possible level of programming is to automatically change the particular implementation of some code on the fly (like field medicine, but field enhancement): to avoid introducing obvious errors, and to notify the developer when the code is so injured and confused that it can't fix itself with super-high confidence.
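A small first step toward that "notify the developer" behavior can be sketched in Python, again under stated assumptions: the growth threshold, the `leak_sentinel` name, and the monitored function are all made up for illustration. The wrapper watches a function's net memory footprint and, rather than attempting any repair, raises the alarm when growth exceeds what it can account for:

```python
import tracemalloc
import warnings
from functools import wraps

def leak_sentinel(max_growth_bytes):
    """Warn when a call leaves behind more than max_growth_bytes
    of net allocated memory (an assumed, illustrative threshold)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if not tracemalloc.is_tracing():
                tracemalloc.start()
            before, _ = tracemalloc.get_traced_memory()
            result = func(*args, **kwargs)
            after, _ = tracemalloc.get_traced_memory()
            growth = after - before
            if growth > max_growth_bytes:
                # "Too injured and confused" to self-repair: don't fix,
                # just notify the developer with the evidence.
                warnings.warn(
                    f"{func.__name__} grew memory by {growth} bytes",
                    ResourceWarning,
                )
            return result
        return wrapper
    return decorator

_history = []

@leak_sentinel(max_growth_bytes=10_000)
def suspect():
    _history.append(bytearray(50_000))  # deliberate leak for the demo

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    suspect()

print(len(caught))  # the sentinel emitted a warning for the leaky call
```

This is the "error signal" end of the pipeline only; the self-fixing the paragraph imagines would sit between detection and notification.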
Code that would have caught Casey's bug and fixed it for him: that is about as high-level as you can get. A programming language that did that would be substantially easier for novices to begin coding in.
For better or worse.