Explainable AI
We want to prevent the accidental harm caused by superintelligent AI systems.
We can reasonably argue that ongoing research in machine learning, combined with advances in computational infrastructure, may enable machines to surpass human capabilities in most domain-specific tasks. Although we might be a few decades away from this, we would be surprised if it did not happen.
There are many arguments about what the implications of this will be. Some argue that superintelligent machines might pose an existential risk to humanity. Some say that machine intelligence and its goals are largely independent, so we can build machines to pursue any well-defined objective. Others argue that machines will rapidly refine their own knowledge, increasing total silicon-based intelligence.
We are therefore working on machine explainability techniques that allow humans and machines to speak the same language. These techniques let algorithms build on human common sense, making machines more robust, and they let us quantify, and therefore benchmark, machine performance; a minimal sketch of one such technique follows.
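To make this concrete, here is a minimal sketch of one widely used explainability technique, gradient-based feature attribution (saliency), applied to a toy logistic-regression model. The model, weights, and input below are hypothetical placeholders for illustration, not part of our actual systems.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability the toy model assigns to the positive class."""
    return sigmoid(w @ x + b)

def saliency(w, b, x):
    """Gradient of the predicted probability w.r.t. each input feature.

    For logistic regression this has a closed form:
        d sigma(w.x + b) / dx_i = sigma * (1 - sigma) * w_i
    Features with larger |gradient| influence the prediction more,
    giving a simple, human-inspectable explanation of the output.
    """
    p = predict(w, b, x)
    return p * (1.0 - p) * w

# Hypothetical model parameters and input, chosen only for illustration.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([1.0, 0.5, -2.0])

print("prediction:", predict(w, b, x))
print("attribution per feature:", saliency(w, b, x))
```

Because the attributions are numeric, they can also be compared across models or inputs, which is one way such explanations support quantification and benchmarking.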
Researchers: Debargha Ganguly, Debayan Gupta