The Ethics of AI in Marketing: When Personalization Crosses the Line

AnantaSutra Team
December 20, 2025
11 min read

AI makes hyper-personalisation possible, but when does helpful become intrusive? Explore the ethical boundaries of AI-driven marketing for Indian brands.

The Personalisation Paradox

A customer browses running shoes on your website at 10 PM. By 10:05 PM, they see an ad for the exact shoes on Instagram. At 10:30 PM, they receive an email with a discount code. At 11 PM, an AI voice agent calls to ask if they need help completing their purchase. The next morning, their spouse mentions seeing ads for running shoes on their shared tablet.

At what point did helpful personalisation become surveillance?

This is the question that Indian marketers must confront as AI-powered marketing tools become increasingly capable. The technology to track, predict, and influence consumer behaviour has never been more powerful. But capability does not confer permission, and the line between personalisation and manipulation is thinner than most marketers acknowledge.

How AI Enables Hyper-Personalisation

Modern AI marketing tools create detailed individual profiles by combining multiple data streams:

  • Behavioural data: Website visits, click patterns, scroll depth, time on page, search queries, and purchase history
  • Demographic data: Age, gender, location, income bracket, and family status
  • Psychographic data: Interests, values, lifestyle preferences, and personality traits inferred from digital behaviour
  • Contextual data: Device type, time of day, weather, current events, and physical location
  • Social data: Social media activity, connections, group memberships, and content engagement
  • Cross-device data: Linking behaviour across phone, laptop, tablet, and smart TV to create a unified profile
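As a rough illustration of how these streams converge into one record, a unified profile might be modelled like this. This is a hypothetical sketch, not any vendor's actual schema; every field name here is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    # Hypothetical unified profile; all field names are illustrative only.
    customer_id: str
    behavioural: dict = field(default_factory=dict)    # e.g. pages viewed, cart abandons
    demographic: dict = field(default_factory=dict)    # e.g. city, age band
    psychographic: dict = field(default_factory=dict)  # inferred interests and traits
    contextual: dict = field(default_factory=dict)     # device, time of day, location
    social: dict = field(default_factory=dict)         # engagement and connections
    devices: list = field(default_factory=list)        # linked device identifiers

    def merge_device(self, device_id: str) -> None:
        """Cross-device linking: attach another device to the same person."""
        if device_id not in self.devices:
            self.devices.append(device_id)

profile = CustomerProfile(customer_id="c-101")
profile.merge_device("phone-abc")
profile.merge_device("tablet-xyz")
print(len(profile.devices))  # 2
```

The `merge_device` method is the quietly powerful step: once devices are linked, behaviour on a shared tablet becomes attributable to an individual, which is exactly how the opening scenario's spouse ends up seeing running-shoe ads.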

AI systems then use these profiles to predict what each individual will buy, when they will buy it, what message will persuade them, and through which channel. The accuracy of these predictions is remarkable, and depending on your perspective, that accuracy is either impressive or unsettling.

Where the Ethical Lines Are

1. Consent Versus Resignation

When a user clicks "Accept All Cookies" on a banner they barely read, have they genuinely consented to having their behaviour tracked across the internet? Technically, under most current laws, yes. Ethically, the answer is far less clear.

True consent is informed, specific, and freely given. The reality of digital consent is that most people accept tracking because refusing it makes websites unusable, not because they understand or agree with what they are consenting to. Ethical AI marketing requires honest consent mechanisms that genuinely inform users, not dark patterns designed to maximise acceptance rates.
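One way to make consent specific rather than all-or-nothing is to record it per purpose, defaulting everything to "no" until the user opts in. A minimal sketch, with purpose names that are illustrative rather than drawn from any particular consent platform:

```python
from datetime import datetime, timezone

class ConsentRecord:
    """Granular consent: each purpose is opted in or out independently."""

    PURPOSES = {"email_marketing", "retargeting", "cross_device_tracking", "profiling"}

    def __init__(self, user_id: str):
        self.user_id = user_id
        # Default to no consent: nothing is tracked until the user opts in.
        self.grants = {p: False for p in self.PURPOSES}
        self.updated_at = datetime.now(timezone.utc)

    def opt_in(self, purpose: str) -> None:
        if purpose not in self.PURPOSES:
            raise ValueError(f"unknown purpose: {purpose}")
        self.grants[purpose] = True
        self.updated_at = datetime.now(timezone.utc)

    def opt_out(self, purpose: str) -> None:
        # Opting out is the same one-call interface as opting in: no dark patterns.
        if purpose not in self.PURPOSES:
            raise ValueError(f"unknown purpose: {purpose}")
        self.grants[purpose] = False
        self.updated_at = datetime.now(timezone.utc)

    def allows(self, purpose: str) -> bool:
        return self.grants.get(purpose, False)

consent = ConsentRecord("u-42")
consent.opt_in("email_marketing")
print(consent.allows("email_marketing"))  # True
print(consent.allows("retargeting"))      # False
```

The design choice worth noting is the default: an "Accept All" banner inverts this logic by defaulting everything to `True`, which is precisely the resignation-as-consent problem described above.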

2. Personalisation Versus Manipulation

There is a meaningful difference between showing a customer products they are likely to want and exploiting psychological vulnerabilities to drive purchases they would not otherwise make.

Consider these scenarios:

| Scenario | Personalisation | Manipulation |
| --- | --- | --- |
| Product recommendation | Suggesting products based on stated preferences | Exploiting impulse buying tendencies identified by AI |
| Dynamic pricing | Offering loyalty discounts to returning customers | Charging higher prices to users identified as less price-sensitive |
| Urgency messaging | Informing a customer that a sale ends tomorrow | Showing fake countdown timers or fabricated scarcity |
| Retargeting | One reminder about an abandoned cart | Relentless cross-platform pursuit for weeks after a casual browse |
| Emotional targeting | Sending cheerful content during festivals | Targeting ads for comfort food to users showing signs of depression |

The second column in each row crosses an ethical line. It uses AI's analytical power not to serve the customer but to exploit them.

3. Transparency Versus Opacity

Most consumers do not understand how AI marketing works. They do not know that the email they received was triggered by an algorithm that predicted their likelihood of purchase based on their browsing behaviour. They do not know that the price they see might differ from what another customer sees. They do not know that the "Customers also bought" section is a prediction engine optimised for conversion, not a reflection of genuine customer behaviour.

Ethical AI marketing requires transparency. Not necessarily exposing every algorithm, but being honest about the fact that AI is being used, what data it processes, and how decisions are made.

4. Inclusion Versus Discrimination

AI models trained on historical data can perpetuate and amplify existing biases. A marketing AI that learns from past customer data might systematically exclude certain demographics from premium product recommendations, show different credit offers based on postal codes that correlate with caste or religion, or target high-interest financial products at economically vulnerable populations.

In India, where socioeconomic stratification intersects with caste, religion, language, and geography, the risk of AI-driven marketing discrimination is particularly acute. Ethical AI marketing requires regular bias audits of marketing algorithms and their outcomes.
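What a "bias audit of outcomes" can mean in practice: compare the rate at which each customer segment receives a given offer, and flag segments whose rate falls far below the best-treated segment's. A minimal sketch, where the segment labels, sample data, and threshold are all illustrative (the 0.8 threshold echoes the "four-fifths rule" used in fairness auditing, but the right value for a given campaign is a policy decision):

```python
from collections import defaultdict

def audit_offer_rates(events, threshold=0.8):
    """Flag segments whose offer rate is below `threshold` x the best segment's rate.

    `events` is an iterable of (segment, received_offer) pairs, one per
    customer who was eligible for the offer.
    """
    eligible = defaultdict(int)
    offered = defaultdict(int)
    for segment, received in events:
        eligible[segment] += 1
        offered[segment] += int(received)
    rates = {s: offered[s] / eligible[s] for s in eligible}
    best = max(rates.values())
    # Return only the segments that fall below the disparity threshold.
    return {s: r for s, r in rates.items() if r < threshold * best}

# Illustrative data: segment_a gets the offer 80% of the time, segment_b 40%.
events = ([("segment_a", True)] * 80 + [("segment_a", False)] * 20
          + [("segment_b", True)] * 40 + [("segment_b", False)] * 60)
print(audit_offer_rates(events))  # {'segment_b': 0.4}
```

Run quarterly against real campaign logs, a check like this turns "audit for bias" from an aspiration into a concrete pass/fail report, with flagged segments triggering human investigation.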

5. Children and Vulnerable Populations

The Digital Personal Data Protection Act, 2023 (DPDPA) explicitly prohibits behavioural monitoring and targeted advertising directed at children under 18. But the ethical obligation extends beyond legal compliance. AI marketing systems should include safeguards against targeting vulnerable populations, including the elderly, people with addictive tendencies, and individuals experiencing financial distress.

The Business Case for Ethical AI Marketing

Ethics is not the enemy of profit. In fact, the opposite is increasingly true:

Trust drives lifetime value. A 2025 Edelman Trust Barometer study found that 81% of Indian consumers say trust in a brand is a deciding factor in their purchase decisions. Brands that are perceived as invasive or manipulative lose customers permanently.

Regulation is tightening. The DPDPA is just the beginning. As AI capabilities grow, so will regulatory scrutiny. Businesses that build ethical practices now will not need to scramble when stricter regulations arrive.

Backlash is viral. A single screenshot of a creepy ad experience, shared on social media, can damage a brand far more than the ad could ever have helped it. In the age of social media, ethical lapses are amplified instantaneously.

Ethical brands attract talent. Skilled marketers and technologists increasingly prefer to work for companies whose practices they can be proud of. Ethical AI marketing is a recruitment advantage.

A Framework for Ethical AI Marketing

The CARE Principles

Consent: Obtain genuine, informed consent. Make opt-out as easy as opt-in. Respect preferences across all channels.

Accountability: Assign responsibility for ethical AI marketing to a specific person or team. Conduct regular audits. Accept responsibility when things go wrong.

Respect: Treat customer data as borrowed, not owned. Use it to serve the customer's interests, not just your own. Set limits on personalisation intensity.

Equity: Ensure AI marketing does not discriminate. Audit algorithms for bias. Ensure all customer segments receive fair treatment.

Practical Guidelines for Indian Marketing Teams

  • Limit retargeting frequency: Set a maximum number of retargeting impressions per user per week. Three to five is reasonable. Thirty is harassment.
  • Avoid emotional exploitation: Do not target users based on inferred emotional states such as loneliness, grief, anxiety, or financial stress.
  • Be transparent about AI: When a chatbot is an AI, say so. When recommendations are AI-generated, disclose it. When pricing is dynamic, be honest about it.
  • Respect data deletion requests promptly: When a customer asks to be forgotten, comply fully and quickly. Do not drag the process out hoping they will change their mind.
  • Audit for bias quarterly: Review your AI marketing outcomes by demographic segment. If certain groups are systematically receiving different treatment, investigate and correct.
  • Create an ethics review process: Before launching any AI marketing campaign that uses new data sources, new targeting criteria, or new personalisation techniques, run it through an ethics review.
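The first guideline above, limiting retargeting frequency, is straightforward to enforce in code. A minimal sketch of a per-user rolling-window cap (the cap value and in-memory storage are illustrative; a production system would persist counts in a shared store and enforce the cap across all ad platforms):

```python
import time
from collections import defaultdict, deque

class FrequencyCap:
    """Allow at most `cap` retargeting impressions per user per rolling window."""

    def __init__(self, cap: int = 5, window_seconds: int = 7 * 24 * 3600):
        self.cap = cap
        self.window = window_seconds
        self.impressions = defaultdict(deque)  # user_id -> impression timestamps

    def may_show(self, user_id: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        q = self.impressions[user_id]
        # Drop impressions that have aged out of the rolling window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.cap:
            return False  # cap reached: suppress the ad rather than serve it
        q.append(now)
        return True

cap = FrequencyCap(cap=3)
results = [cap.may_show("u-1", now=t) for t in range(5)]
print(results)  # [True, True, True, False, False]
```

The key design decision is that the default answer flips to "do not show" once the cap is hit; the burden is on the system to justify each impression, not on the user to escape them.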

Case Study: When Personalisation Gets It Right

An Indian direct-to-consumer wellness brand implemented AI-driven personalisation with explicit ethical guardrails. Their approach:

  • Customers explicitly chose their communication preferences during onboarding, including frequency, channels, and content types
  • AI recommendations were based only on data the customer knowingly provided, not inferred psychological profiles
  • Retargeting was limited to three impressions per week across all platforms
  • The brand published a quarterly transparency report showing how customer data was used

Results after six months: customer trust scores increased by 34%, email open rates rose by 22%, and customer lifetime value increased by 18%, compared to the industry average. Ethical personalisation did not reduce effectiveness; it enhanced it.

The Responsibility of AI Builders

The responsibility for ethical AI marketing does not rest solely with marketers. It extends to the companies that build AI marketing tools. Platforms that make it easy to engage in manipulative practices, whether through unrestricted retargeting, emotional profiling, or opaque algorithmic pricing, share responsibility for how their tools are used.

At AnantaSutra, we build AI marketing and voice agent platforms with ethical guardrails embedded in the product. Our tools include configurable frequency caps, transparent AI disclosure features, bias monitoring dashboards, and DPDPA-compliant consent management. We believe that the most effective marketing is the kind that customers welcome, not the kind they endure.
