There is no dedicated REST endpoint for uploading IFC, DWG, OBJ, or any other file type. The supported programmatic workflow is a hybrid GraphQL + REST (or S3) flow. This section documents the entire mechanism so developers do not have to reverse-engineer the server.

Pre-requisites

  • A personal access token with the streams:write scope
  • A Speckle project where you have the Can edit role
Get your personal access token from: Avatar → Profile → Personal Access Tokens in the Speckle web app.
For all GraphQL mutations, send the token as a request header named Authorization:
Authorization: Bearer YOUR_TOKEN
This guide requires a Speckle server with the new file importer feature flag enabled. This is available by default on app.speckle.systems. For self-hosted instances, please ensure the FF_NEXT_GEN_FILE_IMPORTER_ENABLED and FF_LARGE_FILE_IMPORTS_ENABLED feature flags are enabled.

Overview

Uploading a file and triggering Speckle’s automatic ingestion pipeline is a multi-step process:
  1. Ask Speckle Server for a presigned upload URL (GraphQL mutation: fileUploadMutations.generateUploadUrl)
  2. Upload the file directly to blob storage (simple HTTP PUT to the presigned URL)
  3. Create a model (if needed) (GraphQL mutation: modelMutations.create - optional if you already have a model)
  4. Tell Speckle to parse and ingest the file (GraphQL mutation: fileUploadMutations.startFileIngestion)
  5. Check the ingestion status (GraphQL query for the ingestion progress)
Important notes:
  • Large files are never sent through the Speckle Server REST endpoints.
  • The upload URL points to your server’s configured blob storage (S3, S3 compatible, or Azure).
  • The ETag returned from Blob storage is required for step 4 (file ingestion).
  • Successful ingestion results in a new Model Version.

Step 1: Generate an upload URL

We start by notifying the Speckle Server of our intent to upload a file. The server responds with a URL, which we will use to upload the file in a subsequent step, and a unique ID for the expected file.
POST /graphql
Mutation:
mutation GenerateFileUploadUrl($input: GenerateFileUploadUrlInput!) {
  fileUploadMutations {
    generateUploadUrl(input: $input) {
      url
      fileId
    }
  }
}
Variables example:
{
  "input": {
    "fileName": "MyModel.ifc",
    "projectId": "your-project-id"
  }
}
Response example:
{
  "data": {
    "fileUploadMutations": {
      "generateUploadUrl": {
        "url": "https://your-s3-endpoint.example.org/presigned-upload-url",
        "fileId": "file-id-to-use-in-step-3"
      }
    }
  }
}
Important: Save the fileId from the response - you must use this exact value in Step 4 when triggering the file ingestion. Do not use the filename; use the fileId returned here.
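The mutation above can be issued with any HTTP client. Below is a minimal Python sketch using only the standard library; the GRAPHQL_URL and TOKEN values are placeholder assumptions you must replace for your server:

```python
import json
import urllib.request

GRAPHQL_URL = "https://app.speckle.systems/graphql"  # placeholder: your server's GraphQL endpoint
TOKEN = "YOUR_TOKEN"  # placeholder: a personal access token with the streams:write scope

GENERATE_UPLOAD_URL = """
mutation GenerateFileUploadUrl($input: GenerateFileUploadUrlInput!) {
  fileUploadMutations {
    generateUploadUrl(input: $input) { url fileId }
  }
}
"""

def graphql_payload(query: str, variables: dict) -> bytes:
    """Encode a GraphQL request body as JSON bytes."""
    return json.dumps({"query": query, "variables": variables}).encode()

def generate_upload_url(project_id: str, file_name: str) -> dict:
    """Request a presigned upload URL; returns {"url": ..., "fileId": ...}."""
    req = urllib.request.Request(
        GRAPHQL_URL,
        data=graphql_payload(
            GENERATE_UPLOAD_URL,
            {"input": {"projectId": project_id, "fileName": file_name}},
        ),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["data"]["fileUploadMutations"]["generateUploadUrl"]
```

Keep the returned fileId; it is needed again in Step 4.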

Step 2: Upload the file to the presigned URL

PUT {presignedURL}
Headers required:
Content-Type: application/octet-stream   # or appropriate type
Body:
<raw file bytes>
Important:
  • Blob storage returns an ETag in the headers of the response.
  • The ETag is a unique identifier for the uploaded file content, which Speckle uses to verify file integrity before ingesting the file.
  • The ETag is required for the final step.
Example curl:
curl -X PUT \
  -H "Content-Type: application/octet-stream" \
  --data-binary @MyModel.ifc \
  "https://your-s3-endpoint.example.org/presigned-upload-url"
If the upload succeeds (HTTP 200), the response headers will include an ETag header. Record this value:
ETag: "ad13b92e173..."
Note: When using this ETag in Step 4, it must be passed as a double-quoted string (e.g., "\"ad13b92e173...\"").
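In Python, the same PUT can be sketched with the standard library. The helper names below (upload_to_presigned_url, quoted_etag) are hypothetical; quoted_etag implements the double-quoting rule noted above:

```python
import urllib.request

def upload_to_presigned_url(presigned_url: str, data: bytes) -> str:
    """PUT the raw file bytes to the presigned URL and return the ETag response header."""
    req = urllib.request.Request(
        presigned_url,
        data=data,
        headers={"Content-Type": "application/octet-stream"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["ETag"]

def quoted_etag(etag: str) -> str:
    """Step 4 expects the ETag as a double-quoted string; add quotes if they are missing."""
    if etag.startswith('"') and etag.endswith('"'):
        return etag
    return '"' + etag + '"'
```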

Step 3: Create a model (if needed)

Ingestion requires a target model to receive the file data. If you don’t have a model yet, you can create one programmatically:
POST /graphql
Mutation:
mutation CreateModel($input: CreateModelInput!) {
  modelMutations {
    create(input: $input) {
      id
    }
  }
}
Variables example:
{
  "input": {
    "projectId": "your-project-id",
    "name": "model name"
  }
}
Response example:
{
  "data": {
    "modelMutations": {
      "create": {
        "id": "the-target-model-id"
      }
    }
  }
}
Store the returned id - it is used as modelId in the next step.
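The same request pattern as in Step 1 works here. This sketch (hypothetical helper names, standard library only) creates a model and returns its id:

```python
import json
import urllib.request

CREATE_MODEL = """
mutation CreateModel($input: CreateModelInput!) {
  modelMutations { create(input: $input) { id } }
}
"""

def create_model_variables(project_id: str, name: str) -> dict:
    """Build the variables object for the CreateModel mutation."""
    return {"input": {"projectId": project_id, "name": name}}

def create_model(graphql_url: str, token: str, project_id: str, name: str) -> str:
    """Create a model and return its id (used as modelId in Step 4)."""
    body = json.dumps(
        {"query": CREATE_MODEL, "variables": create_model_variables(project_id, name)}
    ).encode()
    req = urllib.request.Request(
        graphql_url,
        data=body,
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["modelMutations"]["create"]["id"]
```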

Step 4: Trigger the file ingestion

Once the file is uploaded, tell Speckle to parse, convert, and index it, and to create a new model version.
POST /graphql
Mutation:
mutation StartFileIngestion($input: StartFileImportInput!) {
  fileUploadMutations {
    startFileIngestion(input: $input) {
      id
      statusData {
        status
      }
    }
  }
}
Variables example:
{
  "input": {
    "etag": "\"ad13b92e173...\"",
    "fileId": "file-id-from-step-1-response",
    "modelId": "the-target-model-id",
    "projectId": "your-project-id"
  }
}
Important: The fileId must be the exact value returned from Step 1’s generateUploadUrl response. Do not use the filename - use the fileId from that response.
Response example:
{
  "data": {
    "fileUploadMutations": {
      "startFileIngestion": {
        "id": "your-file-ingestion-id",
        "statusData": {
          "status": "queued"
        }
      }
    }
  }
}
Note: The status enum signals the job status; possible values are queued, processing, success, failed, and cancelled. Your file is now in the ingestion pipeline. Once ingestion completes successfully, a new Version will appear under the referenced Model.
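Assembling the StartFileIngestion variables is where the ETag quoting is easiest to get wrong. A small sketch (hypothetical helper name):

```python
def start_ingestion_variables(project_id: str, model_id: str, file_id: str, etag: str) -> dict:
    """Build StartFileImportInput variables; the etag must be a double-quoted string."""
    if not (etag.startswith('"') and etag.endswith('"')):
        etag = '"' + etag + '"'
    return {
        "input": {
            "projectId": project_id,
            "modelId": model_id,
            "fileId": file_id,  # the fileId from Step 1, never the filename
            "etag": etag,
        }
    }
```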

Step 5: Getting the ingestion status

Once ingestion has started, the job is identified by the id returned from fileUploadMutations.startFileIngestion. Use this id to query the status of the ingestion job.
POST /graphql
Query:
query Ingestion($ingestionId: ID!, $projectId: String!) {
  project(id: $projectId) {
    ingestion(id: $ingestionId) {
      statusData {
        ... on ModelIngestionQueuedStatus {
          status
          progressMessage
        }
        ... on ModelIngestionSuccessStatus {
          status
          versionId
        }
        ... on ModelIngestionProcessingStatus {
          status
          progressMessage
        }
        ... on ModelIngestionFailedStatus {
          status
          errorReason
        }
      }
    }
  }
}
Variables example:
{
  "ingestionId": "your-file-ingestion-id",
  "projectId": "your-project-id"
}
Response example:
{
  "data": {
    "project": {
      "ingestion": {
        "statusData": {
          "status": "success",
          "versionId": "the-new-version-id-created-from-the-import"
        }
      }
    }
  }
}
In case of an error, the error response would look like:
{
  "data": {
    "project": {
      "ingestion": {
        "statusData": {
          "status": "failed",
          "errorReason": "the reason for the failure"
        }
      }
    }
  }
}
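A simple polling loop over this query can wait for a terminal status. This is a sketch with assumed helper names; the interval and timeout values are arbitrary choices:

```python
import json
import time
import urllib.request

INGESTION_QUERY = """
query Ingestion($ingestionId: ID!, $projectId: String!) {
  project(id: $projectId) {
    ingestion(id: $ingestionId) {
      statusData {
        ... on ModelIngestionQueuedStatus { status progressMessage }
        ... on ModelIngestionProcessingStatus { status progressMessage }
        ... on ModelIngestionSuccessStatus { status versionId }
        ... on ModelIngestionFailedStatus { status errorReason }
      }
    }
  }
}
"""

def is_terminal(status: str) -> bool:
    """queued/processing are transient; success/failed/cancelled end the job."""
    return status in {"success", "failed", "cancelled"}

def poll_ingestion(graphql_url, token, project_id, ingestion_id,
                   interval=5.0, timeout=600.0):
    """Poll the Ingestion query until a terminal status, returning the statusData."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        body = json.dumps({
            "query": INGESTION_QUERY,
            "variables": {"projectId": project_id, "ingestionId": ingestion_id},
        }).encode()
        req = urllib.request.Request(
            graphql_url, data=body,
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {token}"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            status_data = json.load(resp)["data"]["project"]["ingestion"]["statusData"]
        if is_terminal(status_data["status"]):
            return status_data
        time.sleep(interval)
    raise TimeoutError("file ingestion did not reach a terminal status in time")
```

For long-running imports, a GraphQL subscription (mentioned below) avoids polling entirely.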

What developers need to know

Which file formats are supported?
  • On app.speckle.systems: IFC, DWG, DXF, OBJ, STL, 3DM, and others, depending on server version.
  • On self-hosted servers: IFC only.
Self-hosted servers rely on open-source dependencies, which limits the file formats they can support. Formats like DWG, DXF, and 3DM require proprietary libraries that are not available in open-source distributions, so they are only available on app.speckle.systems.
What happened to StartFileImport?
The StartFileImport mutation has been deprecated and replaced by StartFileIngestion. The new mutation provides more granular status updates and allows subscribing to real-time updates. The Ingestion queries are common across all ingestion processes, whether initiated by the file import service or by other Speckle integrations. Please update your code to use StartFileIngestion and the new Ingestion status queries.
Why a hybrid GraphQL + blob storage workflow?
  • Files go directly to blob storage for performance and scale.
  • GraphQL mutations model “actions” better than REST for asynchronous workflows.
  • GraphQL subscriptions allow real-time updates on the ingestion status, which is not possible with REST.
Can I still use the legacy REST upload?
For app.speckle.systems, no - the old REST upload is gone. For self-hosted instances, it may still be necessary if:
  • Your server does not have a publicly available S3 service configured
  • The feature flag FF_LARGE_FILE_IMPORTS_ENABLED is not enabled
In these cases, you may need to use the legacy REST endpoints (/api/file/...). However, we recommend configuring S3 storage and enabling the feature flag to use the modern file upload workflow described above.
Last modified on February 20, 2026