If we are talking about non-sentient robots, then yes, you can program them with unbreakable laws.
But I think sentience carries with it the ability to break those laws. Look at humans, for example. I think free choice is required for sentience. Truly sentient computers will not be programmed; they will be endowed with a brain-like structure, and that structure will restructure itself as it learns. That is what we do: we restructure our brain cells as we learn. Learning means we change physically.
Unless you think that true sentience cannot be obtained outside of biological constructs (outside of humanity), this will be a big problem someday. If we are unable to build true silicon-based AI, then that would prove the existence of the human soul. I don't think God will let us off that easily. He has never exempted us from the moral effects of our decisions in the past; why would he start now? And if there is no god, then sentience merely emerges from complexity, and there is nothing to stop us from creating complexity that leads to sentience. Either way, I think sentient artificial intelligence is an inevitability.