Would it be possible for an AGI to learn beyond human input in every field it knows? Which narrow AIs have already demonstrated superintelligence? How can we begin to imagine what goals an ASI would set for itself? Would an ASI ever dare to be honest with humans? And would we even understand what we found if we looked into the ‘black box’ of an ASI’s workings?