I have been noticing bias creeping into ML pipelines, especially in classification systems trained on skewed data. The problem is not always obvious at inference time, but fairness metrics like the disparate impact ratio can expose it. People assume models are neutral, yet both feature engineering and data sourcing introduce bias vectors into the pipeline.
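As a minimal sketch of the metric mentioned above: the disparate impact ratio compares the positive-outcome rate of an unprivileged group to that of a privileged one, with values below roughly 0.8 commonly flagged (the "four-fifths rule"). The function name, the group encoding, and the sample data here are all illustrative assumptions, not from any specific library.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates: unprivileged / privileged.

    y_pred: binary predictions (1 = positive outcome)
    group:  protected-attribute labels (0 = unprivileged, 1 = privileged)
    A value below ~0.8 is often treated as evidence of disparate impact.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # positive rate for unprivileged group
    rate_priv = y_pred[group == 1].mean()    # positive rate for privileged group
    return rate_unpriv / rate_priv

# Hypothetical example: 25% vs 75% positive rates -> ratio of 1/3,
# well below the 0.8 threshold, so this model would be flagged.
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact_ratio(y_pred, group))  # -> 0.333...
```

Note that the ratio only looks at model outputs per group; it says nothing about label quality or which features caused the skew, so it is a detector, not a diagnosis.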