Datasets in Central are now Entity Lists, please help translate!

If you haven't already, make sure to provide feedback in our poll about updating entity lists via import, @tgachet: Central Entity uploads from file

If you're feeling really impatient, I have bulk-added entities with a pyodk Python script like this one:

from pyodk.client import Client
import csv

client = Client()

# The filename/path of a CSV. It must contain an __id column with version 4 UUIDs and a label column with the desired entity labels.
# Other column headers must exactly match the names of properties in the entity list specified below.
ENTITIES_CSV = "participants.csv"

# The ID of a project on your server that contains an entity list with name matching the name below
PROJECT_ID = 1  # replace with your project's ID

# The name of an existing entity list that you want to populate
ENTITY_LIST = "participants"

with open(ENTITIES_CSV) as entities_csv:
    csv_reader = csv.reader(entities_csv)

    header = next(csv_reader)
    for row in csv_reader:
        body = dict()
        body["data"] = dict()
        for item in list(zip(header, row)):
            if item[0] == "__id":
                body["uuid"] = item[1]
            elif item[0] == "label":
                body["label"] = item[1]
            else:
                body["data"][item[0]] = item[1]
        r = client.session.post(f"projects/{PROJECT_ID}/datasets/{ENTITY_LIST}/entities", json=body)
        if r.status_code != 200:
            print(f"Failed to create entity {body['uuid']}: {r.status_code} {r.text}")
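Since the CSV has to arrive with version 4 UUIDs already in its `__id` column, a small one-off script can add that column first. This is just a sketch: the `raw.csv`/`participants.csv` filenames and the sample columns are placeholders, and the first block only fabricates an input file so the example runs end to end.

```python
import csv
import uuid

# Placeholder input: in practice this is your existing data file.
with open("raw.csv", "w", newline="") as f:
    csv.writer(f).writerows([["label", "age"], ["Ann", "34"], ["Bo", "28"]])

# Write a copy that prepends an __id column containing a fresh
# version 4 UUID for every data row.
with open("raw.csv", newline="") as src, open("participants.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(["__id"] + next(reader))  # extend the header row
    for row in reader:
        writer.writerow([str(uuid.uuid4())] + row)
```

The resulting `participants.csv` is then ready for the script above.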
Quick script with explicit property names

import csv
import uuid
from pyodk.client import Client

client = Client()

with open('entities.csv', encoding='utf-8-sig') as f:
    reader = csv.DictReader(f)
    for row in reader:
        first_name = row['First']
        last_name = row['Last']

        entity = {'uuid': str(uuid.uuid4()),
                  'label': first_name + " " + last_name,
                  'data': {'first_name': first_name, 'last_name': last_name}}
        r = client.session.post('projects/<projectid>/datasets/users/entities', json=entity)

You could also make it dynamically use the column header names (as done above) and update entities in a similar way using this endpoint. Note that the API will continue using `dataset` in its paths even though the UI now says "entity list"!
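For updates, the request shape is similar. A sketch of building one update request, assuming Central's entity-update endpoint (a PATCH to the entity's URL, which expects either a `baseVersion` query parameter or `force=true`); the project ID, entity list name, label, and property values here are all placeholders, and the actual network call is left commented out since it needs a live server:

```python
import uuid

PROJECT_ID = 1                    # placeholder project ID
ENTITY_LIST = "participants"      # placeholder entity list name
entity_uuid = str(uuid.uuid4())   # normally the UUID of the existing entity

# The path still says "datasets" even though the UI says entity lists.
url = f"projects/{PROJECT_ID}/datasets/{ENTITY_LIST}/entities/{entity_uuid}"
params = {"force": "true"}  # or {"baseVersion": <current version>} for conflict checking
body = {"label": "Ann Example", "data": {"first_name": "Ann"}}

# With the same pyodk client as above:
# r = client.session.patch(url, params=params, json=body)
```

Only the properties you include in `data` are changed; the rest of the entity is left alone.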