The Dublin rental market has a reply problem that compounds in both directions. Renters send the same generic message to 20-plus properties and hear back from maybe 2 or 3. Academic research on UK and Irish rental markets puts the cold-inquiry response rate between 12% and 20%, depending on the area and season. Caspar Bannink, founder of HomeScout.io, confirmed that range through his own user interviews.
His read: it’s a signal quality problem wrapped in a UX problem. And it’s exactly the kind of gap you can build against.
Why Landlords Never Reply
When a listing goes live on Daft.ie, the dominant Irish rental portal, it typically receives 50 to 150 inquiries in the first 48 hours. Landlord-side tooling is essentially nonexistent: no status tags, no inbox filtering, no reply templates. Just email forwarding.
The landlord skims a subset, replies to whoever seems credible and convenient, and moves on. The inquiry sitting at position 78 could be from the perfect tenant. It probably never gets opened.

On the applicant side, the constraint is symmetric. Renters know supply is tight and competition is high, so the rational strategy is to apply wide. Landlords recognize the blast-email pattern and deprioritize it. Both sides end up producing low-signal behavior, and the market equilibrium stays broken.
The Classification Problem Generic Emails Fail
From a signal processing standpoint, a landlord reading 150 emails is running a classification task: does this person actually want this property, or are they spamming everything? Generic messages fail that classifier reliably.
What passes it, according to Bannink:
- References something specific from the listing: the garden, the commute distance to a named workplace, the pet policy
- Answers the implicit screening questions: move-in date, preferred lease length, employment type
- Has a coherent reason for wanting that specific location
Writing that version of an inquiry for 20 properties manually is O(n) effort. Most people won’t do it. That’s the gap.
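The landlord-as-classifier framing can be made concrete. A minimal sketch (an illustration, not HomeScout's actual logic) that scores how much of an inquiry's distinctive vocabulary actually comes from the listing:

```python
def specificity_score(inquiry: str, listing: str) -> float:
    """Fraction of the inquiry's distinctive words that also appear
    in the listing text. A generic blast email scores near zero."""
    # Filler words that carry no listing-specific signal (illustrative list).
    stopwords = {
        "the", "a", "an", "i", "to", "is", "in", "and", "of", "for",
        "this", "would", "am", "my", "your", "hi", "hello", "property",
    }
    inquiry_words = {w.strip(".,!?").lower() for w in inquiry.split()} - stopwords
    listing_words = {w.strip(".,!?").lower() for w in listing.split()}
    if not inquiry_words:
        return 0.0
    return len(inquiry_words & listing_words) / len(inquiry_words)


listing = "Two-bed apartment in Rathmines with a south-facing garden, pets considered."
generic = "Hi, I am interested in your property. Is it still available?"
specific = "Hi, the south-facing garden in Rathmines is exactly what we want, and pets considered is a big plus."

assert specificity_score(generic, listing) < specificity_score(specific, listing)
```

A real landlord isn't running word-overlap, of course; the point is that specificity is measurable, and generic messages measurably lack it.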
What HomeScout Built
HomeScout’s AI email composer generates property-specific inquiry drafts. The flow is straightforward:
- User finds a listing they want to apply to
- They click Generate inquiry
- The system reads the listing data: title, description, location, price, and any landlord-specified requirements
- It generates a draft that references specifics from the listing and includes answers to standard screening questions
- User reviews and edits before sending
The screening question answers don’t get generated per inquiry. They pull from a saved user profile, so the answers are consistent and accurate across every application.
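The profile-to-answers step might look something like this sketch (the field names are assumptions, not HomeScout's actual schema):

```python
def screening_answers(profile: dict) -> str:
    """Render saved renter-profile fields as the bullet list the composer
    injects into every draft, so answers never vary between applications."""
    # Keys are hypothetical; a missing field is omitted rather than invented.
    fields = [
        ("move_in_date", "Move-in date"),
        ("lease_length", "Preferred lease length"),
        ("employment", "Employment"),
    ]
    lines = [
        f"- {label}: {profile[key]}"
        for key, label in fields
        if profile.get(key)
    ]
    return "\n".join(lines)


profile = {"move_in_date": "1 July", "employment": "full-time, software engineer"}
print(screening_answers(profile))
# - Move-in date: 1 July
# - Employment: full-time, software engineer
```

Because the answers are rendered from stored data rather than generated, they stay consistent and can never be hallucinated.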
What They Got Wrong in Early Versions
Bannink is candid about the failure modes they hit before the feature worked reliably.
Too formal. First drafts read like legal correspondence. Real inquiry emails are conversational. Renters don’t write “I would like to express my interest in the aforementioned property.” They write something closer to “I’d love to come view this, I work nearby and the commute would be ideal.”
Hallucinated specifics. Early versions invented details not present in the listing. Referencing a south-facing garden when the listing never mentioned orientation is a problem when the landlord reads it and knows it’s wrong. The fix was a strict constraint: only reference what’s explicitly in the listing data.
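One way to enforce that constraint is a post-generation check that flags feature claims absent from the listing. A rough sketch, with an illustrative (not exhaustive) feature vocabulary:

```python
# Property features models are tempted to invent. Illustrative watchlist.
RISKY_FEATURES = {"garden", "balcony", "parking", "south-facing", "ensuite", "garage"}

def unsupported_specifics(draft: str, listing: str) -> set[str]:
    """Return feature words the draft mentions that the listing never does.
    A non-empty result means the draft should be regenerated or edited."""
    draft_words = {w.strip(".,!?").lower() for w in draft.split()}
    listing_lower = listing.lower()
    return {f for f in RISKY_FEATURES & draft_words if f not in listing_lower}


listing = "Bright one-bed flat near Phibsborough, street parking available."
draft = "The south-facing garden and parking really appeal to me."
print(sorted(unsupported_specifics(draft, listing)))
# ['garden', 'south-facing'] — flagged; 'parking' passes because the listing mentions it
```

In practice the strict prompt constraint does most of the work; a check like this would just be a backstop.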
Missing implicit screening questions. The questions landlords actually care about aren’t always stated in the listing. The user profile layer solved this by letting those answers come from saved renter preferences rather than being generated fresh each time.

Model Choice and Prompt Engineering
Bannink uses GPT-4o for this feature. He tested smaller models and found the failure modes were too frequent: hallucinated specifics, wrong tone, context dropped from the listing. On this specific task, the quality delta between models matters more than the cost delta, because the output is user-facing in a context with real stakes.
Fine-tuning on successful inquiry emails is on the roadmap, but the dataset isn’t large enough yet to make it worthwhile.
The prompt structure that works:
- System prompt establishes persona (renter, not an AI assistant) and tone (conversational, specific, not corporate)
- Listing data passed as structured context, not a raw HTML scrape
- User profile answers included as a short bullet list for the model to draw from
- Explicit negative instructions: don’t fabricate specifics, don’t use formal register, don’t mention this is AI-generated
- Output format: plain text, no subject line, 3 to 4 short paragraphs
His observation on the negative instructions: they matter more than the positive ones. The default model behavior drifts toward formality and generic language. You have to actively constrain it away from those patterns.
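Assembled as an OpenAI-style chat payload, that structure might look like the sketch below. The prompt wording is paraphrased from the bullets above, not HomeScout's actual prompt, and the listing fields are assumed:

```python
def build_messages(listing: dict, profile_bullets: str) -> list[dict]:
    """Assemble chat messages per the structure above: persona and tone in
    the system prompt, negative constraints stated explicitly, listing data
    passed as structured fields rather than scraped HTML."""
    system = (
        "You are a renter writing an inquiry email about a listing. "
        "Tone: conversational and specific, never corporate or formal. "
        # The negative instructions below pull harder than the positive ones.
        "Do NOT fabricate details absent from the listing data. "
        "Do NOT use formal register. "
        "Do NOT mention that this email is AI-generated. "
        "Output plain text only: no subject line, 3 to 4 short paragraphs."
    )
    user = (
        "Listing:\n"
        f"- Title: {listing['title']}\n"
        f"- Location: {listing['location']}\n"
        f"- Price: {listing['price']}\n"
        f"- Description: {listing['description']}\n\n"
        f"My saved answers to standard screening questions:\n{profile_bullets}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]
```

The returned list would be handed to a chat-completions call; the sketch stops at message construction so it stays model-agnostic.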
The Catch: It’s a Half Solution
Bannink is upfront that the email composer is a partial fix. The underlying problem is that landlord inboxes have no structure, so even a well-crafted inquiry can get buried. The complete solution means building on the landlord side too: structured applicant pipelines, automated first-response, viewing schedulers. That’s a different product scope than what HomeScout is tackling right now.
For now, improving inquiry quality is the lever available on the renter side. Renters who use the composer report better outcomes than those who don't.
If you’re dealing with a similar response-rate problem in a two-sided market, the structural insight here transfers: write for the classifier, not the reader. Specificity is what passes the filter.