I am not sure about the merits of the “research” behind the “sampling” and the resulting “recommendations”, but I can foresee situations where I would be forced to deliberate between an AI-based recommendation for a surgical approach and my own clinical judgement. This is a peculiar situation for most of us. The linked article (on HBR) provides enough ideas.
For instance, companies like Netflix and YouTube could emphasize AI-based recommendations when utilitarian attributes are relatively more important to people (e.g., when they are selecting a documentary to view) and human-based recommendations (“similar users”) when hedonic attributes are relatively more important (e.g., when selecting a horror movie to view). Similarly, a company in the hospitality industry such as TripAdvisor could emphasize AI-based recommendations for business travel services and de-emphasize AI-based recommendations for leisure travel services.
These contexts serve the idea of “marketing”, but we have seen enough instances of major hospital chains pushing this AI-based decision-making spree onto patients.
AI, apparently, can do no wrong. That is a “hedonistic statement” indeed!
As firms navigate the challenges of attracting and retaining customers in a crowded digital marketplace, those with a good understanding of the conditions under which consumers do and do not trust the “word” of AI recommenders will have a competitive advantage.
I am less inclined to take away the paper’s teachings. However, it is essential to monitor this emerging marketing bullshit from hospitals looking to draw in the “paying customers”.