Streaming
By the end of this guide, you'll know how to handle Plaza's streaming responses without blowing up your client's memory.
When it happens
Any query returning more than 1,000 features switches to chunked transfer encoding automatically. You don't opt in. Smaller responses come back as a normal JSON payload.
What you receive
Newline-delimited JSON. Each line is a complete, parseable JSON object:
```
HTTP/1.1 200 OK
transfer-encoding: chunked
content-type: application/geo+json

{"type":"Feature","geometry":{...},"properties":{...}}
{"type":"Feature","geometry":{...},"properties":{...}}
{"type":"Feature","geometry":{...},"properties":{...}}
...
```
Parse line by line as data arrives. Don't buffer the whole response and try to JSON.parse() it at once -- the concatenated lines aren't a single valid JSON document, and holding everything in memory defeats the point of streaming.
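The line-buffered approach looks the same in any language: accumulate raw chunks, split on newlines, parse only the complete lines, and carry the trailing fragment forward. Here's a minimal Python sketch; the function name and the chunk iterable are illustrative, not part of any Plaza client library.

```python
import json

def iter_features(chunks):
    """Yield one parsed object per complete NDJSON line.

    `chunks` is any iterable of str fragments. Network reads arrive at
    arbitrary boundaries, so a single line may span several chunks.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        lines = buffer.split("\n")
        buffer = lines.pop()  # last element may be an incomplete line
        for line in lines:
            if line.strip():
                yield json.loads(line)
    if buffer.strip():  # final line may arrive without a trailing newline
        yield json.loads(buffer)
```

Note the final flush after the loop: if the stream doesn't end with a newline, the last feature is still sitting in the buffer.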
Client code
curl
```shell
curl -N "https://plaza.fyi/api/v1/elements?bbox=2.2,48.8,2.4,48.9&tags=building" \
  -H "x-api-key: pk_live_YOUR_KEY" \
  | while IFS= read -r line; do
      echo "Got feature: $(echo "$line" | jq -r '.properties.osm_id')"
    done
```
The -N flag disables curl's output buffering so you see features as they arrive.
JavaScript
```javascript
const response = await fetch(
  "https://plaza.fyi/api/v1/elements?bbox=2.2,48.8,2.4,48.9",
  { headers: { "x-api-key": "pk_live_YOUR_KEY" } }
);

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop(); // incomplete line stays in the buffer

  for (const line of lines) {
    if (line.trim()) {
      const feature = JSON.parse(line);
      console.log(feature.properties.osm_id);
    }
  }
}
```
The key detail: lines.pop() keeps the last (potentially incomplete) line in the buffer until the next chunk completes it. Only parse complete lines.
Python
```python
import requests
import json

resp = requests.get(
    "https://plaza.fyi/api/v1/elements",
    params={"bbox": "2.2,48.8,2.4,48.9", "tags": "building"},
    headers={"x-api-key": "pk_live_YOUR_KEY"},
    stream=True,
)

for line in resp.iter_lines():
    if line:
        feature = json.loads(line)
        print(feature["properties"]["osm_id"])
```
stream=True is the important bit. Without it, requests buffers the entire response into memory before you can iterate.
All output formats stream
| Format | Content-Type | What each line looks like |
|---|---|---|
| GeoJSON | application/geo+json | One Feature per line |
| JSON | application/json | One object per line |
| CSV | text/csv | Header row, then data rows |
| XML | application/xml | OSM XML elements |
Set the format with ?format=csv (or whichever you want).
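For CSV the pattern is slightly different from NDJSON: the first line is the header, and every later line is a data row. A small Python sketch of consuming that shape incrementally -- the helper name and the osm_id/name columns are illustrative, not a guarantee of what Plaza's CSV output contains:

```python
import csv

def iter_csv_rows(lines):
    """Turn a stream of CSV lines (header row first) into dicts.

    `lines` is any iterable of str lines, e.g. resp.iter_lines()
    decoded to text. Uses the csv module so quoted fields survive.
    """
    it = iter(lines)
    header = next(csv.reader([next(it)]))
    for line in it:
        if line.strip():
            yield dict(zip(header, next(csv.reader([line]))))
```

Wrapping each line in csv.reader rather than splitting on commas matters once a field contains a quoted comma.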
Timeouts
Streaming connections stay open for up to 5 minutes. If your query runs longer, the connection closes and you get a partial result. If you're hitting this limit, narrow your query -- tighter bounding box, more specific tag filters, or split the area into smaller chunks.
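One way to split an area into smaller chunks is to tile the bounding box into a grid and issue one query per tile. A minimal sketch, assuming the usual min_lon,min_lat,max_lon,max_lat bbox order shown in the examples above; the helper name is made up:

```python
def split_bbox(min_lon, min_lat, max_lon, max_lat, n=2):
    """Split a bounding box into an n x n grid of smaller boxes,
    each usable as its own ?bbox= query parameter."""
    lon_step = (max_lon - min_lon) / n
    lat_step = (max_lat - min_lat) / n
    return [
        (min_lon + i * lon_step, min_lat + j * lat_step,
         min_lon + (i + 1) * lon_step, min_lat + (j + 1) * lat_step)
        for i in range(n)
        for j in range(n)
    ]
```

Start with n=2 and increase it until each tile's query finishes comfortably inside the 5-minute window; note that features crossing tile edges may appear in more than one result set, so deduplicate on osm_id if that matters for your pipeline.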
Backpressure
Plaza respects TCP backpressure. If your client reads slowly, the server pauses sending until you catch up. Neither side accumulates unbounded memory. You can process features at whatever pace your application needs without worrying about the server flooding you with data faster than you can handle it.
When not to stream
If you're fetching a few hundred features, streaming adds complexity for no benefit. The regular JSON response is fine. Streaming pays off when you're pulling thousands or millions of features -- building extracts, bulk analysis, feeding a data pipeline.