Neon is serverless Postgres. Neon databases elastically scale up and down according to load—including scaling to zero—and database operations are greatly simplified via robust API support and database branching capabilities. But just like every Postgres, to get the most out of Neon’s performance, it’s good to know a few tricks and best practices.
Compute sizing and connections management
Let’s start by quickly covering some Neon resource sizing essentials:
- If your application handles multiple connections, use connection pooling.
- Neon allocates a certain amount of RAM per connection to ensure stable performance. A general rule of thumb is that Neon allows approximately 100 connections per 1 GB of RAM. Use this ratio to plan the scale of your compute resources based on the expected number of concurrent connections.
- The best way to cover this is by taking advantage of Neon’s autoscaling to dynamically adjust compute resources based on actual usage patterns. Autoscaling can help manage unexpected spikes in connections or workload, ensuring that your database remains responsive under varying conditions.
While it might be tempting to minimize costs by keeping compute resources limited, under-provisioning can lead to performance issues, especially during peak loads. If you enable autoscaling, you don’t have to worry about this – we highly recommend it.
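As a sketch of what pooling looks like in practice: Neon exposes a pooled connection string by adding a -pooler suffix to the compute endpoint's hostname (the hostname, user, password, and database below are placeholders, not real values):

```shell
# Direct connection: limited by the Postgres max_connections setting
psql "postgresql://user:password@ep-example-123456.us-east-2.aws.neon.tech/neondb"

# Pooled connection: note the -pooler suffix on the endpoint hostname
psql "postgresql://user:password@ep-example-123456-pooler.us-east-2.aws.neon.tech/neondb"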
Troubleshooting Postgres query performance
As we covered in this Support Recap blog post, fixing query performance is something that comes up frequently among Postgres users. Things were looking fast yesterday but are sluggish today: what’s going on?
Using pg_stat_statements for performance analysis
First, you will want to pinpoint exactly which queries are problematic. You can use pg_stat_statements for that:
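If you don't have it enabled yet (the extension ships with Neon, as it does with stock Postgres), you can enable it with:

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```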
This extension aggregates statistics such as the number of times a query has been executed, the total time spent on execution, or the number of rows returned. You can explore the structure and types of data it collects by running:
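For example, in psql:

```sql
\d pg_stat_statements
```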
Among other things, pg_stat_statements allows you to identify long-running queries. For example:
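A query along these lines surfaces the statements that have consumed the most execution time (on Postgres 13+ the column is total_exec_time; older versions call it total_time):

```sql
SELECT
    userid,
    query,
    calls,
    total_exec_time,
    rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```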
The output would look similar to this:
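For illustration only (the queries and numbers below are invented):

```
 userid |                       query                        | calls | total_exec_time |  rows
--------+----------------------------------------------------+-------+-----------------+--------
  16391 | SELECT customer_id, SUM(amount) FROM sales GROU... |   320 |       981347.21 |  64000
  16391 | SELECT * FROM sales WHERE customer_id = $1         |  5400 |       120934.55 | 540000
  16391 | UPDATE inventory SET stock = stock - $1 WHERE ...  |  8900 |        45120.07 |   8900
```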
This output would help you identify queries that could benefit from optimization. Queries with high execution times and a significant number of calls are good candidates for review, like this one:
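For instance, an aggregation like the following (the sales table here is a hypothetical running example, not something pg_stat_statements provides):

```sql
SELECT customer_id, SUM(amount)
FROM sales
GROUP BY customer_id;
```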
Diving deeper with EXPLAIN ANALYZE
Once you have identified your problematic queries, you can use EXPLAIN (ANALYZE, BUFFERS) on them to dive deeper into why they are slow:
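For example, for a hypothetical aggregation over a sales table:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, SUM(amount)
FROM sales
GROUP BY customer_id;
```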
Consider this hypothetical output:
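Something like this (all numbers invented for illustration):

```
 GroupAggregate  (cost=69028.39..73778.39 rows=200 width=40) (actual time=1305.100..1911.332 rows=200 loops=1)
   Group Key: customer_id
   Buffers: shared hit=2048 read=6370, temp read=1591 written=1597
   ->  Sort  (cost=69028.39..70278.39 rows=500000 width=16) (actual time=1305.021..1570.911 rows=500000 loops=1)
         Sort Key: customer_id
         Sort Method: external merge  Disk: 12728kB
         Buffers: shared hit=2048 read=6370, temp read=1591 written=1597
         ->  Seq Scan on sales  (cost=0.00..8418.00 rows=500000 width=16) (actual time=0.018..95.442 rows=500000 loops=1)
               Buffers: shared hit=2048 read=6370
 Planning Time: 0.112 ms
 Execution Time: 1931.270 ms
```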
This would tell you a lot. For example: the execution plan begins with a sequential scan (Seq Scan) of the sales table, followed by sorting (Sort) the results on customer_id, and then a grouping operation (GroupAggregate). This indicates that the query scans the entire table to find the relevant rows, which is not efficient, especially if your table is getting large.
Improving query performance in Neon Postgres
Once you have an idea of where things may be wrong, you can try these tactics:
Add new indexes (if you need them)
If we go back to the previous example, given the insights from EXPLAIN ANALYZE, you might decide to add an index on customer_id, especially if that was a query you run often:
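A minimal sketch (the index name is our choice; CONCURRENTLY avoids locking out writes while the index builds):

```sql
CREATE INDEX CONCURRENTLY idx_sales_customer_id
    ON sales (customer_id);
```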
After implementing this index, rerunning EXPLAIN ANALYZE on the query should show that Postgres now performs an index scan instead of a sequential scan, which should reduce the execution time.
Reduce bloat
After ruling out indexes, the next thing to consider would be table bloat. Managing bloat can be (unfortunately) quite a pain in Postgres, so it really pays off to keep an eye on it. A bloated table not only will be less performant, but it will also need much more storage than a healthy table – therefore inflating your bill.
Let’s go back to our previous EXPLAIN example output. Imagine we already had an index on customer_id; its absence from the plan might indicate that the index is inefficient, possibly due to bloat. A bloated index can lead Postgres to fall back to a full table scan.
To fix this, you could consider a strategy with two parts:
- First, you would fix your inefficient index by running the REINDEX command. This command will rebuild your index from scratch. Beware: this is an intensive operation, so plan to do this during a maintenance period.
- Second, you would prevent this from happening again by addressing table bloat regularly. Consider fine-tuning your autovacuum settings, for example by adjusting parameters such as autovacuum_vacuum_scale_factor and autovacuum_vacuum_threshold. You can also schedule regular bloat check-ups as part of your Postgres maintenance routine. For example, the pgstattuple extension provides functions that show table and index bloat levels.
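Sketching both parts with the hypothetical table and index from earlier (REINDEX ... CONCURRENTLY requires Postgres 12+):

```sql
-- Part 1: rebuild the bloated index from scratch.
-- Intensive operation: schedule it for a maintenance window.
REINDEX INDEX CONCURRENTLY idx_sales_customer_id;

-- Part 2: check bloat levels periodically with pgstattuple
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('sales');                 -- table stats: dead tuples, free space
SELECT * FROM pgstatindex('idx_sales_customer_id'); -- B-tree stats: avg_leaf_density, fragmentation
```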
Cache your data
If you optimize indexes and reduce bloat but still observe suboptimal performance, checking cache usage could be a good next step.
Ensuring that data you frequently access is predominantly served from cache can mitigate performance issues not solved by indexing alone. This is especially important in Neon, given its unique architecture that extends Postgres shared buffers with a local file cache. The process of analyzing cache usage therefore looks a bit different in Neon vs in traditional Postgres.
To evaluate caching effectiveness in Neon, you need to look at local file cache hits instead of shared buffer hits. Neon allows you to do this by providing a neon_stat_file_cache view. To access this view, you need to install the neon extension first:
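```sql
CREATE EXTENSION IF NOT EXISTS neon;
```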
After your workload has been running and generating data, you can analyze your cache hit ratio by running this command:
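For example (the exact column set may vary by Neon version; check the docs):

```sql
SELECT * FROM neon_stat_file_cache;
```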
Your output might look like this:
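With invented numbers for illustration:

```
 file_cache_misses | file_cache_hits | file_cache_used | file_cache_writes | file_cache_hit_ratio
-------------------+-----------------+-----------------+-------------------+----------------------
           2133643 |       108999742 |             607 |          10767410 |                98.08
```

Here, a hit ratio of 98.08% would sit just below Neon's 99% guideline.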
If the hit ratio falls below the ideal threshold (99% in Neon), reconsider your compute configuration – e.g. increase your autoscaling limits. Often, the root cause is a compute too small for your working set, which is a simple thing to fix.
Optimizing ingestion rates in Neon Postgres
Now that we’ve talked about how to diagnose and fix slow queries, it’s time to cover writes. If you care about ingesting as much data as possible into your Postgres database, here are some simple strategies that can get you a long way.
Make sure your client has enough compute
This is basic, but it happens. At the start of this post, we mentioned how important it is to make sure your Neon instances have enough resources for good performance; but to optimize ingestion, the resources available on the client side matter too. Ensure that your client has enough CPU, memory, and bandwidth to handle the data load efficiently. Also, try hosting your Neon project as close as possible (geographically) to your client.
Use read replicas
This tip could also live in the query performance section since it helps both with writes and reads. If your application deals with heavy writes, heavy read queries, or both, Neon allows you to relieve load on your main compute by offloading read queries to a read-only compute endpoint, which is equivalent to the concept of a read replica in other databases.
The main difference between Neon’s implementation and “regular” read replicas resides in the serverless nature of Neon. In Neon, read replicas are ephemeral: they don’t need their own copy of storage, they’re ready instantly when you need them, and they scale down to zero when traffic slows down. They’re much more affordable than traditional read replicas – take advantage of them.
Insert data in batches
One of the simplest yet most effective strategies for improving ingestion rates is batching insert operations. Instead of inserting rows one by one, group them into larger transactions.
For example, instead of executing individual INSERT statements for each row:
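Using our hypothetical sales table:

```sql
INSERT INTO sales (customer_id, amount) VALUES (1, 19.99);
INSERT INTO sales (customer_id, amount) VALUES (2, 35.50);
INSERT INTO sales (customer_id, amount) VALUES (3, 12.00);
```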
Combine them into a single statement:
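A multi-row VALUES list does the same work in a single statement and round trip:

```sql
INSERT INTO sales (customer_id, amount)
VALUES
    (1, 19.99),
    (2, 35.50),
    (3, 12.00);
```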
Clean up old indexes
One last tip. In the section about query performance, we covered why creating new indexes is sometimes a good idea – but with databases, it’s wise to follow a Goldilocks approach. Having many indexes can hurt your insert operations: every time you write data, Postgres has to update all the indexes associated with the table, which can introduce latency.
A good practice is to regularly evaluate the indexes on your tables (\d table_name in psql) to ensure they are still essential for your queries, deleting the indexes that are no longer needed.
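To spot candidates across the whole database, you can also query Postgres’s statistics views. Indexes with idx_scan = 0 since the last stats reset deserve a closer look – just verify they aren’t enforcing constraints or serving rare-but-critical queries before dropping them:

```sql
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       idx_scan     -- number of index scans since the last stats reset
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;
```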
Safely testing performance improvements
Testing performance improvements in production databases can be challenging due to the risks involved. Neon makes it easy with database branching: you can create a full, isolated copy of your database (data and schema) in a separate branch, where you can experiment freely without the risk of disrupting your live operations.
To create a testing branch, you can use Neon’s CLI (or the API), specifying your production branch as the source:
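For example, with the Neon CLI (the branch name is just an example; check the CLI docs for the exact flags in your version):

```shell
neonctl branches create --name perf-testing --parent main
```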
This creates a new, instantly available database environment that mirrors your production data and structure at the moment of branching, where you can modify indexes, adjust queries, and experiment with different configurations without affecting the production database.
Wrap up
We hope these strategies help you keep your Neon Postgres as fast as possible, so you can keep scaling your workloads with ease. Check out our docs for even more performance advice, and if you haven’t tried Neon yet, create a free account now.