Pre-requisites
- A personal access token with the streams:write scope
- A Speckle project where you have a Can edit role

Get your personal access token from: Avatar → Profile → Personal Access Tokens in the Speckle web app.
All GraphQL requests must be authenticated by sending your token in the Authorization header: Authorization: Bearer <your token>.
This guide requires a Speckle server with the new file importer feature flag enabled. This is available by default on app.speckle.systems. For self-hosted instances, please ensure the FF_NEXT_GEN_FILE_IMPORTER_ENABLED and FF_LARGE_FILE_IMPORTS_ENABLED feature flags are enabled.

Overview
Uploading a file and triggering Speckle’s automatic ingestion pipeline is a multi-step process:
1. Ask the Speckle Server for a presigned upload URL (GraphQL mutation: fileUploadMutations.generateUploadUrl)
2. Upload the file directly to blob storage (a simple HTTP PUT to the presigned URL)
3. Create a model, if needed (GraphQL mutation: modelMutations.create, optional if you already have a model)
4. Tell Speckle to parse and ingest the file (GraphQL mutation: fileUploadMutations.startFileIngestion)
5. Check the ingestion status (GraphQL query for the ingestion progress)
Key points:
- Large files are never sent through the Speckle Server REST endpoints.
- The upload URL points to your server’s configured blob storage (S3, S3-compatible, or Azure).
- The ETag returned from blob storage is required for step 4 (file ingestion).
- Successful ingestion results in a new Model Version.
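The steps above share one transport pattern: every GraphQL call is an HTTP POST to /graphql carrying the token in the Authorization header, while the file itself is PUT to blob storage. A minimal Python sketch of the shared request-building logic (the server URL and token are placeholders you supply):

```python
import json

def build_graphql_request(server_url: str, token: str, query: str, variables: dict):
    """Build the URL, headers, and JSON body for a Speckle GraphQL call."""
    url = f"{server_url.rstrip('/')}/graphql"
    headers = {
        "Authorization": f"Bearer {token}",   # your personal access token
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "variables": variables})
    return url, headers, body
```

Each of the mutations below can be sent through this helper with any HTTP client.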
Step 1: Generate an upload URL
We start by notifying the Speckle Server of our intent to upload a file. The server responds with a URL, which we will use to upload the file in a subsequent step, and a unique ID for the expected file.

POST /graphql
Mutation:
Record the fileId from the response: you must use this exact value in Step 4 when triggering the file ingestion. Do not use the filename; use the fileId returned here.
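As a sketch, the request might be assembled like this in Python. The mutation name fileUploadMutations.generateUploadUrl comes from this guide, but the input type and field names (GenerateFileUploadUrlInput, projectId, fileName, url, fileId) are assumptions - verify them against your server's GraphQL schema explorer:

```python
import json
import urllib.request

# The mutation name is from the guide; the input/output field names are
# illustrative and must be checked against your server's schema.
GENERATE_UPLOAD_URL = """
mutation GenerateUploadUrl($input: GenerateFileUploadUrlInput!) {
  fileUploadMutations {
    generateUploadUrl(input: $input) {
      url      # presigned blob-storage URL, used for the PUT in Step 2
      fileId   # unique id of the expected file, required again in Step 4
    }
  }
}
"""

def build_payload(project_id: str, file_name: str) -> dict:
    """JSON body for the POST /graphql request."""
    return {
        "query": GENERATE_UPLOAD_URL,
        "variables": {"input": {"projectId": project_id, "fileName": file_name}},
    }

def generate_upload_url(server: str, token: str, project_id: str, file_name: str) -> dict:
    req = urllib.request.Request(
        f"{server.rstrip('/')}/graphql",
        data=json.dumps(build_payload(project_id, file_name)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["data"]["fileUploadMutations"]["generateUploadUrl"]
```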
Step 2: Upload the file to the presigned URL
PUT {presignedURL}
Headers required:
- Blob storage returns an ETag in the headers of the response.
- The ETag is a unique identifier for the uploaded file content, which Speckle uses to verify file integrity before ingesting the file.
- The ETag is required for the final step.
If the upload succeeds with a 200 status code, the response headers will contain an ETag header. Record this value; note that it includes surrounding quotes (e.g. "\"ad13b92e173...\"").
Step 3: Create a model (if needed)
File ingestion requires a target model to receive the file data. If you don’t have a model yet, you can create one programmatically:

POST /graphql
Mutation:
Record the model.id from the response, as it will be used as modelId in the next step.
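A sketch of the model-creation call. The mutation name modelMutations.create is from this guide, while the input type and field names (CreateModelInput, projectId, name) are illustrative and should be verified against your server's schema:

```python
# Mutation name per the guide; input/output fields are assumptions.
CREATE_MODEL = """
mutation CreateModel($input: CreateModelInput!) {
  modelMutations {
    create(input: $input) {
      id     # record this: it becomes modelId in Step 4
      name
    }
  }
}
"""

def create_model_payload(project_id: str, model_name: str) -> dict:
    """JSON body for the POST /graphql request."""
    return {
        "query": CREATE_MODEL,
        "variables": {"input": {"projectId": project_id, "name": model_name}},
    }
```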
Step 4: Trigger the file ingestion
Once the file is uploaded, tell Speckle to parse, convert, index and create a new model version.

POST /graphql
Mutation:
fileId must be the exact value returned from Step 1’s generateUploadUrl response. Do not use the filename - use the fileId from that response.
Response example:
The status enum signals the job status; possible values are queued, processing, success, failed, and cancelled.
Your file is now in the ingestion pipeline. Once ingested, a new Version will appear under the referenced Model.
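Putting the pieces together, the ingestion request might be built as below. Only the mutation name fileUploadMutations.startFileIngestion and the status values come from this guide; the input type and field names (StartFileIngestionInput, projectId, modelId, fileId, etag) are assumptions to verify against your schema:

```python
# Mutation name per the guide; input field names are assumptions.
START_FILE_INGESTION = """
mutation StartFileIngestion($input: StartFileIngestionInput!) {
  fileUploadMutations {
    startFileIngestion(input: $input) {
      id       # ingestion job id, used to poll status in Step 5
      status   # queued | processing | success | failed | cancelled
    }
  }
}
"""

def start_ingestion_payload(project_id: str, model_id: str,
                            file_id: str, etag: str) -> dict:
    """JSON body for the POST /graphql request."""
    return {
        "query": START_FILE_INGESTION,
        "variables": {"input": {
            "projectId": project_id,
            "modelId": model_id,   # from Step 3
            "fileId": file_id,     # exact value from Step 1, never the filename
            "etag": etag,          # raw ETag header recorded in Step 2
        }},
    }
```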
Step 5: Getting the ingestion status
Once the ingestion has been started, the job can be tracked via the ingestion id returned in fileUploadMutations.startFileIngestion.id. This id can be used to query the status of the file ingestion job.
POST /graphql
Query:
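The polling loop itself is independent of the exact query shape. A minimal sketch, where fetch_status is any callable that runs the status query with the ingestion id and returns one of the status enum values:

```python
import time

# Terminal values of the status enum described in Step 4.
TERMINAL = {"success", "failed", "cancelled"}

def wait_for_ingestion(fetch_status, poll_seconds: float = 2.0) -> str:
    """Poll until the ingestion job reaches a terminal status.

    fetch_status: callable returning one of
    queued | processing | success | failed | cancelled.
    """
    while True:
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(poll_seconds)
```

For servers with GraphQL subscriptions enabled, subscribing to ingestion updates avoids polling entirely.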
What developers need to know
File formats supported
For app.speckle.systems: IFC, DWG, DXF, OBJ, STL, 3DM, and others depending on server version. For self-hosted servers, IFC only.
Why are different formats supported in self-hosted vs cloud?
Self-hosted servers use open-source code dependencies, which limits the file formats that can be supported. Formats like DWG, DXF, and 3DM require proprietary libraries that are not available in open-source distributions, so they are only available on app.speckle.systems.
Where did the StartFileImport mutation go?
The StartFileImport mutation has been deprecated and replaced with StartFileIngestion. The new mutation provides more granular status updates and allows subscribing to real-time updates. The Ingestion queries are common across all ingestion processes, whether initiated by the file import service or by other Speckle integrations. Please update your code to use StartFileIngestion and the new Ingestion status queries.

Why is this not REST?
Because:
- Files should go directly to blob storage for performance and scale.
- GraphQL mutations model “actions” better than REST for asynchronous workflows.
- GraphQL subscriptions allow real-time updates on the ingestion status, which is not possible with REST.
Should I ever use the old REST upload?
For app.speckle.systems, no. The old REST upload is gone. For self-hosted instances, the old REST upload may still be necessary if:
- Your server does not have a publicly available S3 service configured
- The FF_LARGE_FILE_IMPORTS_ENABLED feature flag is not enabled
In these cases, files can still be uploaded through the legacy REST endpoint (/api/file/...).
However, we recommend configuring S3 storage and enabling the feature flag to use the modern file upload workflow described above.