Building AI Features in Laravel: Patterns That Scale
Everyone's adding AI to their Laravel apps. Most are doing it wrong.
I've spent the last year integrating LLMs into production Laravel applications. Along the way, I've made mistakes, refactored entire systems, and finally landed on patterns that actually work at scale.
Here's what I've learned.
The naive approach (and why it fails)
When you first add AI to your app, it usually looks something like this:
// In a controller somewhere
$response = Http::withToken(env('OPENAI_API_KEY'))
    ->post('https://api.openai.com/v1/chat/completions', [
        'model' => 'gpt-4',
        'messages' => [
            ['role' => 'user', 'content' => $request->input('prompt')],
        ],
    ]);

return $response->json()['choices'][0]['message']['content'];
It works. Ship it.
Then reality hits:
- You need to switch providers (OpenAI is down, or Anthropic is better for your use case)
- Prompts are scattered across controllers
- You can't test anything without hitting the API
- Costs spiral because you have no visibility
- Error handling is an afterthought
Sound familiar?
Pattern 1: Abstract the provider
Your application shouldn't know or care which LLM it's talking to. This sounds obvious, but I've seen codebases where "gpt-4" is hardcoded in 47 different places.
// config/ai.php
return [
    'default' => env('AI_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'driver' => 'openai',
            'model' => env('OPENAI_MODEL', 'gpt-4-turbo'),
            'api_key' => env('OPENAI_API_KEY'),
        ],
        'anthropic' => [
            'driver' => 'anthropic',
            'model' => env('ANTHROPIC_MODEL', 'claude-3-sonnet'),
            'api_key' => env('ANTHROPIC_API_KEY'),
        ],
    ],
];
Use a package like Prism to handle the abstraction. It gives you a unified interface across providers:
use EchoLabs\Prism\Prism;

$response = Prism::text()
    ->using('anthropic', 'claude-3-sonnet')
    ->withPrompt('Summarize this article')
    ->generate();
Switch providers by changing one line. Your business logic stays the same.
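If you'd rather own the abstraction yourself (or put one underneath a package), it can be as small as a contract plus a container binding keyed off that config. A sketch, where the AIService interface and the OpenAIService / AnthropicService classes are illustrative names rather than part of any package:

namespace App\AI\Contracts;

interface AIService
{
    public function generate(Prompt $prompt): AIResponse;
}

// In a service provider: pick the concrete driver from config/ai.php.
// OpenAIService and AnthropicService are hypothetical implementations
// of the contract above.
$this->app->bind(AIService::class, function ($app) {
    return match (config('ai.default')) {
        'anthropic' => new AnthropicService(config('ai.providers.anthropic')),
        default => new OpenAIService(config('ai.providers.openai')),
    };
});

The later patterns in this post type-hint against that AIService contract, so swapping drivers never touches business logic.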
Pattern 2: Prompts are code, treat them like it
Prompts shouldn't live in controllers. They shouldn't be inline strings. They're a critical part of your application logic.
I create dedicated prompt classes:
namespace App\AI\Prompts;

class SummarizeArticlePrompt
{
    public function __construct(
        private string $article,
        private string $tone = 'professional',
        private int $maxWords = 150,
    ) {}

    public function system(): string
    {
        return <<<PROMPT
        You are a content summarizer. Create concise, {$this->tone} summaries.
        Focus on key insights and actionable takeaways.
        Never exceed {$this->maxWords} words.
        PROMPT;
    }

    public function user(): string
    {
        return "Summarize the following article:\n\n{$this->article}";
    }
}
Benefits:
- Version controlled with your code
- Type-hinted parameters
- Testable in isolation (see the test sketch after this list)
- Easy to iterate and A/B test
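Because a prompt is now just a plain class, you can pin its behavior down with an ordinary unit test and never touch an API. A minimal sketch, assuming Pest (swap in PHPUnit assertions if that's your stack):

use App\AI\Prompts\SummarizeArticlePrompt;

it('respects the tone and word limit', function () {
    $prompt = new SummarizeArticlePrompt(
        article: 'Some article body',
        tone: 'casual',
        maxWords: 100,
    );

    // The system prompt should carry the constraints verbatim
    expect($prompt->system())
        ->toContain('casual')
        ->toContain('100 words');

    // The user prompt should carry the article itself
    expect($prompt->user())->toContain('Some article body');
});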
Pattern 3: AI actions, not AI controllers
Following the Laravel Actions pattern, I create dedicated action classes for AI operations:
namespace App\AI\Actions;

class SummarizeArticle
{
    public function __construct(
        private AIService $ai,
        private SummaryRepository $summaries,
    ) {}

    public function execute(Article $article, array $options = []): Summary
    {
        $prompt = new SummarizeArticlePrompt(
            article: $article->content,
            tone: $options['tone'] ?? 'professional',
        );

        $response = $this->ai->generate($prompt);

        return $this->summaries->create([
            'article_id' => $article->id,
            'content' => $response->text,
            'tokens_used' => $response->usage->total,
            'model' => $response->model,
        ]);
    }
}
Now you have:
- Single responsibility
- Dependency injection
- Easy testing with mocks
- Reusable across controllers, commands, jobs (see the command sketch after this list)
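As an example of that reuse, the same action drops straight into an Artisan command for backfilling older content. The command and the summary relationship below are illustrative:

namespace App\Console\Commands;

use App\AI\Actions\SummarizeArticle;
use App\Models\Article;
use Illuminate\Console\Command;

class SummarizeArticles extends Command
{
    protected $signature = 'articles:summarize';

    public function handle(SummarizeArticle $action): void
    {
        // Assumes an Article::summary() relationship exists
        Article::whereDoesntHave('summary')->each(
            fn (Article $article) => $action->execute($article)
        );
    }
}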
Pattern 4: Always go async
AI calls are slow. 500ms on a good day, 5+ seconds when things get rough.
Never block your web requests:
// In your controller
SummarizeArticleJob::dispatch($article);

return response()->json(['status' => 'processing']);

// The job
class SummarizeArticleJob implements ShouldQueue
{
    use Queueable;

    public function __construct(
        public Article $article,
    ) {}

    public function handle(SummarizeArticle $action): void
    {
        $summary = $action->execute($this->article);

        // Broadcast to frontend via websockets
        ArticleSummarized::dispatch($this->article, $summary);
    }

    public function failed(Throwable $e): void
    {
        // Notify, retry logic, fallbacks
    }
}
Use Laravel Reverb or Pusher to push results to the frontend in real-time.
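The event the job dispatches can be a standard broadcast event. A sketch, with an illustrative private channel name (Reverb or Pusher is configured the usual way in config/broadcasting.php):

namespace App\Events;

use App\Models\Article;
use App\Models\Summary;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;

class ArticleSummarized implements ShouldBroadcast
{
    use Dispatchable;

    public function __construct(
        public Article $article,
        public Summary $summary,
    ) {}

    public function broadcastOn(): PrivateChannel
    {
        // The frontend subscribes here and swaps its "processing"
        // state for the finished summary.
        return new PrivateChannel("articles.{$this->article->id}");
    }
}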
Pattern 5: Log everything
You can't optimize what you can't measure.
class AIService
{
    public function generate(Prompt $prompt): AIResponse
    {
        $startTime = microtime(true);

        $response = $this->client->generate($prompt);

        AILog::create([
            'prompt_class' => get_class($prompt),
            'provider' => $this->provider,
            'model' => $this->model,
            'input_tokens' => $response->usage->input,
            'output_tokens' => $response->usage->output,
            'latency_ms' => (microtime(true) - $startTime) * 1000,
            'cost' => $this->calculateCost($response->usage),
        ]);

        return $response;
    }
}
Now you can:
- Track costs per feature (cost calculation sketched below)
- Identify slow prompts
- Compare model performance
- Spot anomalies before they become problems
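The calculateCost() call above deserves a sketch of its own. One approach is per-million-token rates kept in config; the numbers below are placeholders, so check your provider's current pricing before trusting them:

// config/ai.php (excerpt): rates in USD per 1M tokens, placeholders only
'pricing' => [
    'gpt-4-turbo' => ['input' => 10.00, 'output' => 30.00],
    'claude-3-sonnet' => ['input' => 3.00, 'output' => 15.00],
],

// In AIService. $usage is whatever value object your client returns
// with input/output token counts, as used in the log above.
private function calculateCost($usage): float
{
    $rates = config("ai.pricing.{$this->model}", ['input' => 0, 'output' => 0]);

    return ($usage->input / 1_000_000) * $rates['input']
        + ($usage->output / 1_000_000) * $rates['output'];
}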
Pattern 6: Design for failure
AI services fail. A lot. Rate limits, timeouts, content filters, random 500s.
Build resilience from day one:
class ResilientAIService
{
    public function generate(Prompt $prompt): AIResponse
    {
        return retry(
            times: 3,
            callback: fn () => $this->client->generate($prompt),
            sleepMilliseconds: fn ($attempt) => $attempt * 1000,
            when: fn ($e) => $this->isRetryable($e),
        );
    }

    private function isRetryable(Throwable $e): bool
    {
        return $e instanceof RateLimitException
            || $e instanceof TimeoutException;
    }
}
Consider fallback providers:
public function generate(Prompt $prompt): AIResponse
{
    try {
        return $this->primary->generate($prompt);
    } catch (ProviderUnavailableException $e) {
        report($e);

        return $this->fallback->generate($prompt);
    }
}
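Wiring this up can stay invisible to the rest of the app: bind a fallback-aware implementation to the same contract everything else type-hints. The class names here are illustrative:

// In a service provider. FallbackAIService holds a primary and a
// fallback provider and implements generate() as shown above.
$this->app->bind(AIService::class, function ($app) {
    return new FallbackAIService(
        primary: $app->make(AnthropicService::class),
        fallback: $app->make(OpenAIService::class),
    );
});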
The bigger picture
These patterns aren't about making your code pretty. They're about building AI features that:
- Scale — Handle thousands of requests without melting
- Adapt — Switch providers, models, or approaches without rewrites
- Survive — Keep working when things go wrong
- Evolve — Easy to improve as AI capabilities change
The AI landscape moves fast. Your architecture should make it easy to keep up.
What's next?
I'm working on a follow-up post about AI agents in Laravel — when simple prompts aren't enough and you need autonomous, multi-step reasoning.
In the meantime, check out:
- Prism — Unified LLM interface for Laravel
- Atlas — AI agent orchestration
- LarAgent — Production-ready AI agents
What patterns have you found useful? Hit me up on X or LinkedIn.
Building something with AI + Laravel? I'd love to hear about it.
I offer hands-on consulting to help you resolve technical challenges and improve your CMS implementations.
Get in touch if you'd like support diagnosing or upgrading your setup with confidence.
