Swipe left on AI products with poor privacy policies, too much hype with no substance, lack of transparency, and overused features like generic chatbots. Red flags also include unrealistic claims and poor user experience.
@jackkdavies26 To really work, transparency and trust have to be core values at any tech company. It is a little concerning that Microsoft laid off a large part of its well-regarded AI Ethics team at the same time as it is investing heavily in GPT.
@kwaku_amprako I feel like some companies just don't have the time or budget to do any user testing. You have to get feedback from potential customers early in your design process, before you commit too many resources and build the wrong thing.
ANY products or companies lacking clear values-based motivation or guardrails. This includes trust and safety as well as bias and bigotry... not specific to AI, but it seems that in the mad dash to seize this 10x moment, these are afterthoughts instead of built in by design.
One potential red flag is an AI product that doesn't provide clear or transparent explanations of how it works. If the product is making decisions or recommendations, I want to understand the logic and data behind those decisions. Lack of transparency erodes trust in the product.
Another potential issue is an AI product that exhibits bias or makes unfair or discriminatory decisions. Because AI is only as good as the data it is trained on, biased data can produce biased algorithms. Users may be wary of products with a history of biased decisions.