As Artificial Intelligence (AI) and the big data behind it become ever more sophisticated, AI is increasingly an integral and familiar tool in healthcare diagnosis and decision-making. AI technologies are being piloted and implemented as the presumed future of healthcare. This proliferation, in turn, raises concerns in areas such as healthcare ethics, decision-making bias, and the definition of human dignity. It is well established that clinicians can be influenced by subconscious bias; bias in health data is also common and can be life-threatening when not addressed properly. Biased health data is one of the greatest threats to AI. Despite its intent to aid society, AI will continue to perpetuate bias in global health, because the inherent bias in the health data it is trained on compromises both the accessibility and the accuracy of its decision-making capacity. AI is often perceived as unbiased, yet it may carry its creators' biases. Given the increasing global use of AI in bioethics, then, where should we draw the line in delegating healthcare decision-making capacity to software and robots?