AI vs Human Garden Design — Where Things Stand in 2025
- Urban Plot
- Jul 23
- 7 min read
Updated: Jul 24
As AI-generated design tools make their way into every corner of the creative industry, garden design is no exception. Apps now offer planting suggestions in seconds, visualise outdoor spaces in 3D, and claim to deliver full garden plans at the tap of a button.
So where does that leave human garden designers? And if you’re planning to redesign your garden, which route should you trust?
Here’s a realistic look at how AI garden design compares to professional human-led design today—and what you should consider if you’re about to transform your outdoor space.
Planting Plans
AI can generate a planting list based on conditions like sun exposure, soil type, and location. Some tools even reference extensive horticultural databases (including RHS data) to recommend plants that technically “fit.”
But in practice, planting is never that simple. The art of balancing shape, form, structure, pattern and colour is nuanced and subjective. It comes from experience and is difficult to articulate, which makes the information AI is learning from incomplete or, at best, inconsistent.
AI Vs. Human Test: Planting Plans
The prompt I used in ChatGPT: "Create a detailed 4m x 2m planting plan design with an image.
The plan should:
Be designed for full sun
Include a mix of evergreen and deciduous plants
Provide year-round interest (flowers, foliage, structure)
Incorporate pollinator-friendly species where possible
Specify planting positions in a simple layout grid"
AI Output

AI Summary
Here's the main issue: AI can't draw 2D plans the way a garden designer or architect does. It can only draw a picture of a plan, i.e. it looks at what a plan drawing looks like and creates an impression of one, rather than drawing out and measuring each line individually.
As a result, the planting plan looks fine at first glance, but dig deeper and there are major issues, both in the accuracy of the plant labelling and in the general balance and aesthetic of the border:
Only one grass on the far side with no balance or repetition
Verbena bonariensis in a single clump
Little repeat planting for cohesion and continuity
Little thought about companion planting to bring out the best in the plants selected
Far too much Echinacea visually; it would overpower the design
Inconsistent naming and visuals - Nepeta is represented as both a red flower (presumably meant to be the Achillea) and a purple flower identical to the Thyme
Human Output


1. Miscanthus sinensis 'Morning Light', 2. Pittosporum 'Golf Ball', 3. Potentilla fruticosa 'Primrose Beauty', 4. Stipa tenuissima, 5. Agapanthus 'Northern Star', 6. Geranium 'Mavis Simpson', 7. Geum 'Totally Tangerine', 8. Salvia nemorosa 'Caradonna', 9. Silene coronaria, 10. Verbena bonariensis
Human Summary:
We are able to balance planting across the space, using our experience to create rhythm and repetition throughout (well, we think so anyway):
Key plants reused and balanced across the border
Different shades of purple and pink and pops of orange create cohesive but impactful colour
Variations in leaf shapes and colour, not considered by AI
Structure provided by Pittosporum 'Golf Ball' to offset the softer grasses
Companion plants in Salvia 'Caradonna' and Geum 'Totally Tangerine' paired together
All plants named accurately (hopefully)
Verdict:
AI-generated planting plans are still very limited. AI can help with suggestions for plants and combinations as lists, but the creation of a cohesive layout is still some way off. I'm fairly confident that if I attempted to implement this plan it would be (a) difficult and (b) a waste of money, as I would be dissatisfied with the end result.
Winner: HUMAN - a professional planting designer is still essential so you don't waste money on the wrong plants
3D Garden Design Visuals
AI can now produce 3D garden visualisations based on prompts, sketches or even rough outlines. Some platforms use generative models to turn flat plans into glossy renders—complete with shadows, perspective and planting.
Sounds impressive, and it is. But just like with planting plans, the details matter. While AI is good at mimicking the look of a garden, it doesn't understand the logic behind it. There's a big difference between producing a photo-realistic image and creating a design that is buildable, balanced, and tailored to real-world constraints (like budget, levels, or sightlines).
AI Vs. Human Test: 3D Visualisation
The prompt I used in ChatGPT:
The prompt is a bit too long to include in full here, but I basically shared the entire customer brief we collected through our detailed fact-find. It outlines all of the garden conditions, orientation, soil, customer needs and wants, colours, style and plant preferences, as well as anything they don't like or want.
I also asked for the design to be implemented into this garden image.

AI Output

AI Summary
At first glance, the AI image looks slick: paths, lush planting, curves that fit with the client's brief. But again, look closer and the cracks appear, the most glaring of which is that the output bears no resemblance to the garden in the image uploaded. So if you were doing this at home, you would be none the wiser about what you were actually going to do.
The key issue? AI-generated 3D designs are surface-level. They know what a nice garden looks like in theory, but don't understand how it actually functions. It also struggled to create a design that fits into the image of the garden that was uploaded.
Here’s what was missing:
No solutions for specific issues highlighted in the brief regarding the tree stumps at the back of the garden or manhole cover in lawn.
Very typical and traditional output with lawn in the middle and plants around the edge
Not clear where the path leads - there's a fence here
Lots of elements from the brief missing such as wild flower meadow and how to deal with view from window seat
No consideration for light inside the house due to pergola position
Planting looks lovely albeit a bit garish, but species are undetermined and difficult to identify. Most are pretty shapes rather than specific species.
Turning this into a plan that is easily implemented will be difficult (see above), as a lot more information is needed that AI struggles to provide
The end result looks like a garden and could serve you well as a guide to start from. That is a different purpose from a professional garden designer, though, who works with you and your space and tailors specific solutions to your conditions based on experience and their design eye.
Human Output
Our approach to 3D visualisation is rooted in understanding how people use space, not just how it looks. Here’s what we created:


Accurate 3D representation of the space from lots of different angles (we provided 8 3D renders in this case, showing views from different spots)
Inclusion of all elements in the brief including wildflower meadow and how to deal with back border
Moved the patio further into the garden to make better use of the space available
Mix of contrast materials to clearly zone paths and patio areas
Addition of wildlife features such as water, not specifically mentioned in the brief but implied by it
Addition of 2x new benches to capture new views of the garden
Added 3x small trees to break the space up, frame views and provide intimacy
Nuanced planting and plant selections to fit with requests in the brief
Human Summary:
The key difference is that we're designing with intent. Every line, material and planting choice is rooted in both aesthetics and functionality, and tied directly back to the client's brief and garden conditions. It's not just a nice image; it's a space that works and has all the detail needed to build it directly.
AI Vs Human 3D Garden Visuals Verdict:
Winner: HUMAN - AI-generated 3D visuals can inspire, but they can't design. The image might look good at face value, but turning the design into a real garden still requires a lot of creativity and thought from the user. For now, it's still humans who understand the space between the pixels; what a garden designer does and what AI can do are currently not comparable.
AI Vs. Human Test: 2D Plans
The prompt I used in ChatGPT:
I simply asked ChatGPT to "create a 2D plan drawing of the 3d design just created". I did this with memory on and as part of the same conversation.
AI output

AI Summary
Again, at first glance it appears to have created a clear plan you could go some way towards implementing, but look closer and there are some major issues:
First of all, the output bears no resemblance to the 3D render or to the customer's plot
The measurements are not to scale; as with the planting plan, they are a drawing of measurements rather than an architecturally drawn plan with a clear scale
As with the other elements of the design process, AI tools fall short. A garden design doesn't stop at a visual: it needs to be built. That means 2D CAD drawings that specify levels, materials, foundations, drainage, and precise measurements. AI currently can't produce detailed, site-specific, construction-ready drawings without human input.
Human Output
A snapshot of some of the outputs




Human Summary
Lots of detail ready to hand to a landscaper, including measurements, planting plans, notes on the theory and logic of design choices, and materials used across the entire site.
Verdict:
Winner: HUMAN - AI can't draw scale 2D plans, just pictures of what these could look like. A human is still essential.
Looking Ahead
In the next 6–12 months, we expect AI tools to get better at contextual planting suggestions, material comparisons, and layout generation. We’ll be tracking these developments closely to see how AI's capabilities evolve and keeping our clients informed about what’s genuinely useful—and what’s still marketing hype.
If you’re planning a garden project and want advice grounded in both experience and innovation, get in touch with Urban Plot.


