Reference

Field Processor

A field processor allows you to analyze uploaded data and dynamically modify field definitions before the mapping step begins. This is useful when you need to add fields, adjust configurations, or make data-driven decisions about your field schema.

{
  fieldProcessor: function(context) {
    const { fields, headers, records, meta } = context;

    // Return modified fields object
    return {
      ...fields,
      newField: { label: 'New Field' }
    };
  }
}

The field processor runs before the mapping step, so any changes you make will be available to the mapper and visible to users during the mapping process.

Parameters

The field processor receives a context object with the following properties:

context.fields

Description An object containing all field definitions from your configuration. Keys are field names, values are field definitions with labels, transformers, validators, etc.
Type { [key: string]: ImportField }

context.headers

Description An array of all header names found in the uploaded file
Type string[]

context.records

Description An array of preview records (first few rows) from the uploaded file. These are read-only and used for analysis purposes only.
Type ImportRecord[]

context.meta

Description Metadata about the uploaded file including size, name, type, and whether it has a header row
Type { importok: { fileSize: number; fileName: string; fileType: string; withHeader: boolean }, [key: string]: any }
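As a sketch of how these context properties can be used together, the processor below branches on context.meta to skip schema changes for very large uploads. The 100 MB threshold and the sourceFile field are illustrative choices, not ImportOK defaults:

```javascript
// Sketch: a field processor that reads context.meta to decide
// whether to modify the schema. Threshold and field name are
// arbitrary example values.
const fieldProcessor = function (context) {
  const { fields, meta } = context;

  // For very large uploads, leave the schema untouched
  if (meta.importok.fileSize > 100 * 1024 * 1024) {
    return fields;
  }

  // Otherwise, record the source file in a hypothetical extra field
  return {
    ...fields,
    sourceFile: { label: `Imported from ${meta.importok.fileName}` }
  };
};

// Example invocation with a mock context
const result = fieldProcessor({
  fields: { email: { label: 'Email' } },
  meta: {
    importok: {
      fileSize: 1024,
      fileName: 'contacts.csv',
      fileType: 'text/csv',
      withHeader: true
    }
  }
});
```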

Return Value

The field processor should return a fields object containing any modifications, additions, or removals you want to make. You can use the spread operator to start with the existing fields and then modify them:

// Return modified fields object
return {
  ...fields,
  newField: {
    label: "New Field",
    description: "Added based on uploaded data"
  },
  firstName: {
    ...fields.firstName,
    validators: "required"
  }
  // Note: omitted fields will be removed
};

Examples

Here is an example of how you can add fields to the import configuration dynamically based on the uploaded data.

{
  fieldProcessor: (context) => {
    const { fields, headers } = context;
    const modifiedFields = { ...fields };

    // Get existing field names
    const existingFields = Object.keys(fields);

    // Add a field for each header that doesn't have a corresponding field
    headers.forEach(header => {
      const fieldExists = existingFields.some(fieldName =>
        fieldName.toLowerCase() === header.toLowerCase()
      );

      if (!fieldExists) {
        const fieldName = header.toLowerCase().replace(/[^a-z0-9]/g, '');

        modifiedFields[fieldName] = {
          label: header,
          description: `Dynamically added for column: ${header}`
        };
      }
    });

    return modifiedFields;
  }
}

Best Practices

Performance Considerations

Field processors execute on every file upload, making performance optimization crucial for maintaining a responsive user experience. Since processing happens during the upload step, any delays directly impact how quickly users can proceed to the mapping phase of their import workflow.

Please note that context.records is a preview that typically contains only the first 10 rows of data, which is usually sufficient for making informed decisions about field structure, data patterns, and validation requirements.
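The preview rows can be sketched into a simple pattern check like the one below. It assumes preview records expose column values by key and uses a hypothetical amount field; the numeric test itself is only one possible heuristic:

```javascript
// Sketch: inspect the preview records to decide whether a column
// holds numeric data, then annotate the matching field. The
// "amount" field name and the check are illustrative assumptions.
const fieldProcessor = (context) => {
  const { fields, records } = context;

  // Every preview row must have a parseable numeric "amount" value
  const allNumeric = records.length > 0 && records.every(
    (record) => record.amount !== undefined && !isNaN(Number(record.amount))
  );

  if (!allNumeric) return fields;

  return {
    ...fields,
    amount: {
      ...fields.amount,
      description: 'Detected as numeric from the preview rows'
    }
  };
};
```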

Field Naming

When dynamically generating field names from uploaded headers, it is recommended to create clear, documented conventions that produce readable and meaningful field identifiers. This includes standardizing case conversion (camelCase, snake_case, or kebab-case) and handling special characters and spaces consistently.

You should also consider using consistent prefixes or suffixes for dynamically generated fields (e.g., dynamic_, auto_) to clearly distinguish them from static field definitions and make troubleshooting easier during development and debugging.
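One possible convention is sketched below: camelCase conversion plus a dynamic prefix. Both the prefix and the exact conversion rules are project choices, not ImportOK requirements:

```javascript
// Sketch: derive a prefixed camelCase field name from a header.
// The "dynamic" prefix and conversion rules are example conventions.
function toDynamicFieldName(header) {
  const camel = header
    .trim()
    .toLowerCase()
    // Uppercase the character following any run of non-alphanumerics
    .replace(/[^a-z0-9]+(.)/g, (_, ch) => ch.toUpperCase())
    // Drop any remaining non-alphanumeric characters
    .replace(/[^a-zA-Z0-9]/g, '');
  return 'dynamic' + camel.charAt(0).toUpperCase() + camel.slice(1);
}
```

A prefix like this makes dynamically generated fields easy to spot in logs and during debugging, as recommended above.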

Error Handling

ImportOK automatically handles errors in field processors to ensure that failures don't disrupt the import workflow. The system wraps all field processor calls in try-catch blocks, logging any errors to the console while gracefully falling back to the original field configuration when processing fails.
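Because that built-in fallback is all-or-nothing, you may still want to guard risky analysis yourself so a single failure does not discard every other modification. A sketch, using a hypothetical JSON-encoded tags column:

```javascript
// Sketch: guard a risky analysis step inside the processor so a
// failure yields a partial result instead of ImportOK's full
// fallback. The "tags" column and tag_ fields are hypothetical.
const fieldProcessor = (context) => {
  const { fields, records } = context;
  const modifiedFields = { ...fields };

  try {
    // Risky step: parse a JSON-encoded "tags" column from the preview
    const tags = new Set(
      records.flatMap((record) => JSON.parse(record.tags || '[]'))
    );
    tags.forEach((tag) => {
      modifiedFields[`tag_${tag}`] = { label: `Tag: ${tag}` };
    });
  } catch (error) {
    // Keep whatever was built so far; log for debugging
    console.warn('Tag analysis failed, continuing without tag fields', error);
  }

  return modifiedFields;
};
```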

Integration with Mapping

Field processors and mappers work together in a coordinated workflow to provide dynamic field management and column mapping. The field processor executes first, allowing you to analyze the uploaded data and modify the field schema before any mapping decisions are made.

When your field processor returns modified fields (including any dynamically added fields), these updated field definitions become available to the mapping strategy. The mapper receives the complete processed field set and can access all field properties including labels, descriptions, validators, and transformers for both original and dynamically generated fields. However, the mapping strategy retains full control over how these fields are matched to the uploaded columns.

For example, if your field processor adds a field called dynamicEmail based on detecting an email column in the uploaded data, the mapper can choose to map this field to the appropriate header using whatever logic it implements - whether that's fuzzy matching, exact matching, AI-powered mapping, or custom heuristics.
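The processor side of that scenario could be sketched as follows; the email-detection regex is an illustrative heuristic, and the mapping of dynamicEmail to a column remains the mapper's job:

```javascript
// Sketch: add a dynamicEmail field when a header looks like an
// email column. The detection regex is an example heuristic only;
// the mapping strategy decides how the field is matched later.
const fieldProcessor = (context) => {
  const { fields, headers } = context;

  const hasEmailColumn = headers.some(
    (header) => /e[-_ ]?mail/i.test(header)
  );

  if (!hasEmailColumn) return fields;

  return {
    ...fields,
    dynamicEmail: {
      label: 'Email',
      description: 'Added because an email-like column was detected'
    }
  };
};
```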

This separation of concerns allows field processors to focus on schema adaptation while mappers concentrate on column matching. The result is a flexible system where field processors can react to data patterns and add appropriate fields, while mappers can make decisions about which uploaded columns should populate which fields in the final import.
