This blog is presented as a part of C# Advent 2025. Follow the link to check out the rest of the excellent C# and .NET content coming out at a pace of two posts per day between December 1 and 25.
Sentiment analysis has been a cornerstone of Natural Language Processing (NLP) for years, helping businesses understand customer feedback, employee satisfaction, and market trends. As a C# developer, you’ve had access to powerful sentiment analysis tools through Azure AI services. But with the rise of Large Language Models (LLMs), the landscape is shifting.
In this post, we’ll explore both approaches using the example of an employee feedback management platform.
The Traditional Approach: Azure AI Language Service
Microsoft’s Azure AI Language service has been the go-to solution for sentiment analysis in the .NET ecosystem. It provides:
- Sentiment labels: positive, negative, neutral, and mixed
- Confidence scores: values between 0 and 1 for each sentiment category
- Opinion mining: aspect-based sentiment analysis that links sentiments to specific targets (nouns) and assessments (adjectives). For example, in "The team is great but the commute is exhausting," it links positive sentiment to "team" and negative sentiment to "commute"
Implementation with Azure.AI.TextAnalytics
Here’s how you implement traditional sentiment analysis using the Azure.AI.TextAnalytics NuGet package:
```csharp
using Azure;
using Azure.AI.TextAnalytics;

public class SentimentAnalyzer : ISentimentAnalyzer
{
    private readonly TextAnalyticsClient _client;

    public SentimentAnalyzer(AzureLanguageOptions options)
    {
        _client = new TextAnalyticsClient(
            new Uri(options.Endpoint),
            new AzureKeyCredential(options.ApiKey)
        );
    }

    public async Task<SentimentAnalysisResult> AnalyzeTranscriptAsync(
        string transcript,
        CancellationToken cancellationToken = default)
    {
        // Enable Opinion Mining for aspect-based sentiment
        var options = new AnalyzeSentimentOptions
        {
            IncludeOpinionMining = true
        };

        var response = await _client.AnalyzeSentimentAsync(
            transcript,
            options: options,
            cancellationToken: cancellationToken
        );

        DocumentSentiment documentSentiment = response.Value;

        // Extract overall sentiment with confidence scores
        var overallSentiment = new OverallSentiment
        {
            Sentiment = MapSentiment(documentSentiment.Sentiment),
            PositiveConfidence = documentSentiment.ConfidenceScores.Positive,
            NegativeConfidence = documentSentiment.ConfidenceScores.Negative,
            NeutralConfidence = documentSentiment.ConfidenceScores.Neutral
        };

        // Extract topic-level sentiments from opinion mining
        var topics = ExtractTopicSentiments(documentSentiment);

        return new SentimentAnalysisResult
        {
            OverallSentiment = overallSentiment,
            Topics = topics,
            TranscriptText = transcript
        };
    }

    private static List<TopicSentiment> ExtractTopicSentiments(
        DocumentSentiment documentSentiment)
    {
        var topics = new List<TopicSentiment>();

        foreach (var sentence in documentSentiment.Sentences)
        {
            foreach (var opinion in sentence.Opinions)
            {
                var target = opinion.Target;
                topics.Add(new TopicSentiment
                {
                    Topic = target.Text,
                    Sentiment = MapSentiment(target.Sentiment),
                    PositiveConfidence = target.ConfidenceScores.Positive,
                    NegativeConfidence = target.ConfidenceScores.Negative,
                    TextContext = sentence.Text
                });
            }
        }

        return topics;
    }
}
```
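One piece the snippet leaves out is the MapSentiment helper. A minimal sketch of what it might look like, assuming a domain-side SentimentLabel enum and mapping Azure's TextSentiment values by name so the sketch stays SDK-free:

```csharp
using System;

// Domain-side label assumed by SentimentAnalysisResult (not shown in the post).
public enum SentimentLabel { Positive, Negative, Neutral, Mixed }

public static class SentimentMapper
{
    // Azure's TextSentiment values stringify as "Positive", "Negative",
    // "Neutral", or "Mixed", so mapping by name keeps this sketch
    // independent of the Azure.AI.TextAnalytics package.
    public static SentimentLabel MapSentiment(string azureSentiment) =>
        azureSentiment switch
        {
            "Positive" => SentimentLabel.Positive,
            "Negative" => SentimentLabel.Negative,
            "Mixed"    => SentimentLabel.Mixed,
            _          => SentimentLabel.Neutral,
        };
}
```

In the real analyzer you would call this with `documentSentiment.Sentiment.ToString()`; falling back to Neutral for unrecognized values is a deliberate safe default.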
This approach works well for straightforward text analysis, as documented in the Azure AI Language quickstart.
However, when dealing with complex real-world scenarios like employee feedback transcripts, traditional methods reveal their limitations:
- Sentiment Dilution: Multi-speaker conversations mix interviewer questions with employee responses, diluting the actual sentiment being measured.
- Topic Contamination: Agent prompts like “How is your manager?” can skew topic extraction.
- Context Loss: Short answers like “Not good” lose meaning without the question context.
- Figurative Language: Sarcasm, implied meaning, and nuanced expressions are often misclassified.
As the Microsoft article Classification of natural language text with generative AI in Microsoft Fabric notes:
“Traditional methods such as rule-based chunking and sentiment analysis often miss the nuances of language, such as figurative speech and implied meaning.”
The LLM Revolution: Azure OpenAI
This is where Large Language Models change the game. According to Microsoft’s Classification of natural language text with generative AI in Microsoft Fabric:
“Generative AI and Large Language Models (LLMs) change this dynamic by enabling large-scale, sophisticated interpretation of text. They can capture figurative language, implications, connotations, and creative expressions, leading to deeper insights and more consistent classification across large volumes of text.”
Implementation with Azure OpenAI
Here’s the LLM-based approach using Azure OpenAI with structured outputs:
```csharp
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;
using OpenAI.Chat;

public class LLMSentimentAnalyzer : ISentimentAnalyzer
{
    private readonly ChatClient _chatClient;

    public LLMSentimentAnalyzer(AzureOpenAIOptions options)
    {
        var azureClient = new AzureOpenAIClient(
            new Uri(options.Endpoint),
            new AzureKeyCredential(options.ApiKey)
        );
        _chatClient = azureClient.GetChatClient(options.DeploymentName);
    }

    public async Task<SentimentAnalysisResult> AnalyzeTranscriptAsync(
        string transcript,
        CancellationToken cancellationToken = default)
    {
        var systemPrompt = """
            You are a sentiment analysis expert. Analyze the employee feedback transcript
            and extract sentiment information.

            Rules:
            1. Analyze the EMPLOYEE's sentiment only (ignore interviewer/agent)
            2. Overall sentiment can be Positive, Negative, Neutral, or Mixed
            3. Extract key topics mentioned: supervisor, commute, compensation, culture, etc.
            4. PAY SPECIAL ATTENTION to mental health indicators: stress, burnout,
               work-life balance, anxiety, feeling overwhelmed
            5. Confidence scores must sum to 1.0
            6. textContext should be a direct quote from the transcript
            """;

        // Generate JSON schema for structured outputs
        var schema = AIJsonUtilities.CreateJsonSchema(typeof(LLMSentimentResponse));

        var options = new ChatCompletionOptions
        {
            Temperature = 0f,
            ResponseFormat = ChatResponseFormat.CreateJsonSchemaFormat(
                "sentiment_analysis_result",
                BinaryData.FromString(schema.ToString())
            )
        };

        var completion = await _chatClient.CompleteChatAsync(
            [
                new SystemChatMessage(systemPrompt),
                new UserChatMessage(transcript)
            ],
            options,
            cancellationToken
        );

        var jsonResponse = completion.Value.Content[0].Text;
        return ParseLLMResponse(jsonResponse, transcript);
    }
}
```
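Rule 5 in the system prompt asks for confidence scores that sum to 1.0, but model outputs can drift slightly from that constraint even at temperature 0. One defensive option, not part of the original code, is to renormalize the parsed scores; the tolerance value here is an arbitrary assumption:

```csharp
using System;

public static class ConfidenceScores
{
    // Renormalizes the three confidence scores so they sum to exactly 1.0.
    // If the sum is already within the tolerance, values pass through untouched.
    public static (double Pos, double Neg, double Neu) Normalize(
        double pos, double neg, double neu, double tolerance = 0.01)
    {
        double sum = pos + neg + neu;
        if (sum <= 0)
            throw new ArgumentException("Confidence scores must have a positive sum.");
        if (Math.Abs(sum - 1.0) <= tolerance)
            return (pos, neg, neu);
        return (pos / sum, neg / sum, neu / sum);
    }
}
```

Applying this right after deserializing the LLM response keeps downstream consumers from ever seeing scores that sum to 0.97 or 1.04.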
The Structured Output Advantage
A key feature enabling reliable LLM-based sentiment analysis is structured outputs:
“Structured outputs make a model follow a JSON Schema definition that you provide as part of your inference API call… Structured outputs are recommended for function calling, extracting structured data, and building complex multi-step workflows.”
This ensures the LLM returns data in exactly the format your application expects:
```csharp
private class LLMSentimentResponse
{
    public required LLMOverallSentiment OverallSentiment { get; set; }
    public required List<LLMTopicSentiment> Topics { get; set; }
}

private class LLMOverallSentiment
{
    public required string Sentiment { get; set; }
    public required double PositiveConfidence { get; set; }
    public required double NegativeConfidence { get; set; }
    public required double NeutralConfidence { get; set; }
}
```
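The ParseLLMResponse helper called by the analyzer isn't shown above. One plausible implementation, sketched here with System.Text.Json and a trimmed-down response type, deserializes case-insensitively since the model typically emits camelCase property names against the schema:

```csharp
using System;
using System.Text.Json;

// Trimmed-down stand-in for the post's LLMOverallSentiment type.
public sealed class LlmOverallSentiment
{
    public required string Sentiment { get; set; }
    public required double PositiveConfidence { get; set; }
    public required double NegativeConfidence { get; set; }
    public required double NeutralConfidence { get; set; }
}

public static class LlmResponseParser
{
    private static readonly JsonSerializerOptions Options = new()
    {
        // The model emits camelCase per the JSON schema; the C# DTOs use PascalCase.
        PropertyNameCaseInsensitive = true
    };

    public static LlmOverallSentiment Parse(string json) =>
        JsonSerializer.Deserialize<LlmOverallSentiment>(json, Options)
            ?? throw new InvalidOperationException("Empty LLM response.");
}
```

Because the DTO properties are marked `required`, deserialization throws if the model omits a field, which surfaces schema violations immediately instead of propagating default values.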
Hybrid Approach: LLM Preprocessing + Traditional Analysis
For scenarios where you want the best of both worlds, you can use LLMs to preprocess text before sending it to Azure AI Language:
```csharp
public class LlmTranscriptPreprocessor : ITranscriptPreprocessor
{
    private readonly ChatClient _chatClient;

    // ChatClient is created the same way as in LLMSentimentAnalyzer
    public LlmTranscriptPreprocessor(ChatClient chatClient)
    {
        _chatClient = chatClient;
    }

    public async Task<string> PreprocessAsync(
        string multiSpeakerTranscript,
        CancellationToken cancellationToken = default)
    {
        var systemPrompt = """
            Transform this multi-speaker employee feedback transcript into a
            first-person narrative from the employee's perspective only.

            Rules:
            1. Include ONLY the employee's statements
            2. Remove all interviewer questions
            3. Resolve pronouns using context (e.g., "she" -> "my manager")
            4. Expand short answers using question context
               (Q: "How was compensation?" A: "Not good" ->
               "I was not satisfied with the compensation")
            5. Maintain original sentiment and meaning
            """;

        var options = new ChatCompletionOptions { Temperature = 0f };

        var completion = await _chatClient.CompleteChatAsync(
            [
                new SystemChatMessage(systemPrompt),
                new UserChatMessage(multiSpeakerTranscript)
            ],
            options,
            cancellationToken
        );

        return completion.Value.Content[0].Text ?? string.Empty;
    }
}
```
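To wire the two stages together, one option (a hypothetical composition, not from the original post) is a small decorator that satisfies the same ISentimentAnalyzer interface. Minimal stand-in types are declared so the sketch compiles on its own:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Minimal stand-ins for the post's types so the sketch is self-contained.
public sealed record SentimentAnalysisResult(string TranscriptText);

public interface ITranscriptPreprocessor
{
    Task<string> PreprocessAsync(string transcript, CancellationToken ct = default);
}

public interface ISentimentAnalyzer
{
    Task<SentimentAnalysisResult> AnalyzeTranscriptAsync(string transcript, CancellationToken ct = default);
}

// Decorator: run LLM preprocessing first, then hand the cleaned
// single-speaker text to the traditional analyzer.
public sealed class HybridSentimentAnalyzer : ISentimentAnalyzer
{
    private readonly ITranscriptPreprocessor _preprocessor;
    private readonly ISentimentAnalyzer _inner;

    public HybridSentimentAnalyzer(ITranscriptPreprocessor preprocessor, ISentimentAnalyzer inner)
        => (_preprocessor, _inner) = (preprocessor, inner);

    public async Task<SentimentAnalysisResult> AnalyzeTranscriptAsync(
        string transcript, CancellationToken ct = default)
    {
        var cleaned = await _preprocessor.PreprocessAsync(transcript, ct);
        return await _inner.AnalyzeTranscriptAsync(cleaned, ct);
    }
}

// Trivial fakes showing the flow (and making the decorator easy to unit test).
public sealed class FakePreprocessor : ITranscriptPreprocessor
{
    public Task<string> PreprocessAsync(string t, CancellationToken ct = default)
        => Task.FromResult("CLEANED: " + t);
}

public sealed class FakeAnalyzer : ISentimentAnalyzer
{
    public Task<SentimentAnalysisResult> AnalyzeTranscriptAsync(string t, CancellationToken ct = default)
        => Task.FromResult(new SentimentAnalysisResult(t));
}
```

Because the decorator depends only on the interfaces, you can swap LlmTranscriptPreprocessor and the Azure-backed SentimentAnalyzer in via DI without either class knowing about the other.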
Comparison: When to Use Each Approach
| Aspect | Azure AI Language | Azure OpenAI LLM |
| --- | --- | --- |
| Setup Complexity | Low – API key and endpoint | Medium – Deployment needed |
| Cost | Per-document pricing | Per-token pricing |
| Customization | Limited to model versions | Fully customizable via prompts |
| Multi-speaker text | Struggles with mixed speakers | Excels with proper prompting |
| Figurative language | Often misclassified | Better understanding |
| Domain specificity | Generic model | Custom prompts for your domain |
| Latency | ~100-200ms | ~500ms-2s |
| Structured output | Fixed schema | Flexible JSON schema |
Design Pattern: Strategy Pattern for Flexibility
Notice how both implementations share the same interface:
```csharp
public interface ISentimentAnalyzer
{
    Task<SentimentAnalysisResult> AnalyzeTranscriptAsync(
        string transcript,
        CancellationToken cancellationToken = default);
}
```
This allows you to swap implementations based on your needs:
```csharp
// In your DI configuration
services.AddScoped<ISentimentAnalyzer>(sp =>
{
    var config = sp.GetRequiredService<IConfiguration>();
    var useLlm = config.GetValue<bool>("UseLlmSentimentAnalysis");

    return useLlm
        ? new LLMSentimentAnalyzer(azureOpenAIOptions)
        : new SentimentAnalyzer(azureLanguageOptions);
});
```
Conclusion
The shift from traditional NLP services to LLMs isn’t about one being “better” than the other—it’s about choosing the right tool for your specific use case:
- Use Azure AI Language when you need quick, cost-effective sentiment analysis for simple text
- Use Azure OpenAI when dealing with complex, nuanced text that requires domain-specific understanding
- Use a hybrid approach when you want traditional service benefits with LLM preprocessing
As Microsoft’s Natural language processing technology guidance states:
“Language models enhance natural language processing by providing advanced text generation and understanding capabilities… They serve as powerful tools within the broader natural language processing domain by enabling more sophisticated language processing.”
The code examples in this post illustrate how a real employee feedback management system can use LLM-based analysis to significantly improve the accuracy of employee sentiment detection, particularly for sensitive topics like mental health and work-life balance.
If you’d like help integrating sentiment analysis or other AI-driven technologies into your .NET applications, reach out to the experts at Trailhead.
References
Azure AI Language – Sentiment Analysis Overview
Azure AI Language – Quickstart C#
Azure OpenAI Structured Outputs
Text Classification with Generative AI
Natural Language Processing Technology Choices
How Generative AI and LLMs Work
Azure.AI.TextAnalytics NuGet Package

