# LLM Prompting & AI‑IDE Workflow for dFL
dFL is designed to work closely with AI coding assistants. Instead of hand‑coding every data provider and custom feature, you can use an AI‑IDE (we recommend Cursor) to turn a short, structured description of your dataset into a custom dFL app.
This page walks through a simple, repeatable workflow:
- Fill out `APP_Creation.md` for your dataset.
- Paste those answers into your AI‑IDE’s prompt panel and attach the technical context.
- Let the LLM generate the provider and test script.
- Iterate to add optional features (normalizing, smoothing, graphers, autolabelers).
- Run the test script and load the provider into dFL.
Dataset‑specific demos can be found at Demos & Examples. Here we focus on the interaction pattern with the LLM, not on any particular dataset.
## Demo: LLM Prompting Workflow (Video)
The following short video walks through this workflow in practice, using an AI‑IDE (Cursor) to:
- Fill out `APP_Creation.md` for a sample dataset.
- Prompt an LLM to generate a new data provider and test script.
- Add a custom normalizer, smoother, grapher, and autolabeler.
- Run the test script and load the new provider into dFL.
## Step 1 – Fill out `APP_Creation.md`
`APP_Creation.md` is the “requirements form” for your custom app. Before opening an LLM session, fill in at least the Required Information section for your dataset:
- App directory name: `APP_DIR_NAME` (e.g., `weather_app`, `robotics_data`).
- Display name: `APP_DISPLAY_NAME` (e.g., “Weather Analysis”).
- Data folder absolute path: `DATA_FOLDER_ABSOLUTE_PATH`.
- File formats: `FILE_FORMATS` (parquet / csv / pickle / other).
- Record ID definition: `RECORD_ID_DEFINITION` (e.g., “filenames without extension”).
- Time axis: `IS_DATE` = True or False.
    - If True: `TIME_INDEX_OR_COLUMN`, `TIMEZONE_OR_NAIVE`.
    - If False: `TIME_INDEX_OR_COLUMN`, `NUMERIC_TIME_UNITS`.
- Additional sources (optional): `ADDITIONAL_SOURCES` (APIs, extra folders, etc.).
You can also fill in Optional Features if you already know you want:
- Custom smoothing (`SMOOTHING_OPTIONS_TO_INCLUDE`).
- Custom normalizing (`NORMALIZATION_OPTIONS_TO_INCLUDE`).
- Custom graphers (`CUSTOM_GRAPHERS_TO_INCLUDE`).
- Autolabeling logic and label taxonomy (`AUTO_LABELING`, `LABEL_TAXONOMY`).
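For example, a filled‑out Required Information block (plus a couple of optional features) might look like the following. The field names come from `APP_Creation.md`; the weather‑app values are purely hypothetical:

```text
APP_DIR_NAME: weather_app
APP_DISPLAY_NAME: Weather Analysis
DATA_FOLDER_ABSOLUTE_PATH: /home/me/data/weather
FILE_FORMATS: parquet
RECORD_ID_DEFINITION: filenames without extension
IS_DATE: True
TIME_INDEX_OR_COLUMN: timestamp
TIMEZONE_OR_NAIVE: UTC
ADDITIONAL_SOURCES: none

NORMALIZATION_OPTIONS_TO_INCLUDE: robust scaling
SMOOTHING_OPTIONS_TO_INCLUDE: moving average
```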
Keep `APP_Creation.md` open; you will copy from it into your first prompt.
## Step 2 – Start an LLM session in your AI‑IDE
Open your AI‑IDE (e.g., Cursor) in the repository that contains the dFL docs and your future app:
- Make sure it can see:
    - `APP_Creation.md` (your filled‑out answers).
    - `context.md` (the detailed technical reference).
Then open the AI / Chat panel and write a prompt that:
- States your goal (create a dFL data provider).
- Includes your filled‑out `APP_Creation.md` answers.
- Tells the LLM to follow the dFL patterns from the technical context.
Example initial prompt:
```text
I want to create a production‑ready data provider for the dFL application.

Here are my answers from APP_Creation.md (Required Information + any
Optional Features I care about):

[paste the filled‑out sections from APP_Creation.md here]

Please:
1. Read these requirements.
2. Use the dFL technical patterns (fetch_record_ids_for_dataset_id,
   fetch_data, get_provider, trimming, error handling, etc.) from the
   project’s technical reference.
3. Propose a very short plan for the provider file you will generate.
4. Then generate the provider and a small test script that I can run
   with Python to validate it.
```
You don’t need to repeat the entire `context.md` file in the prompt; the AI‑IDE can read it directly from the repo.
## Step 3 – Let the LLM generate the provider and test script
After you send the initial prompt, the LLM should:
1. Create the provider module, for example `apps/[APP_DIR_NAME]/[APP_DIR_NAME]_data_provider.py`, that:
    - Implements the required functions (`fetch_record_ids_for_dataset_id`, `fetch_data`, `get_provider`).
    - Uses your `DATA_FOLDER_ABSOLUTE_PATH`, time axis type, and record ID definition.
2. Create a small test script, for example `test_[APP_DIR_NAME]_provider.py`, that:
    - Imports the provider module.
    - Calls `get_provider()` and checks that required keys are present.
    - Calls `fetch_record_ids_for_dataset_id()` and `fetch_data()` for at least one record.
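To make the shape of the provider file concrete, here is a minimal sketch, assuming the hypothetical weather app from Step 1 (parquet files, datetime index). The exact provider contract, including which keys `get_provider()` must return, is defined in `context.md`, so treat this as an illustration rather than the definitive implementation:

```python
# Hypothetical apps/weather_app/weather_app_data_provider.py (sketch).
import os

import pandas as pd

DATA_FOLDER = "/home/me/data/weather"  # your DATA_FOLDER_ABSOLUTE_PATH


def fetch_record_ids_for_dataset_id(dataset_id):
    """Return record IDs; here, parquet filenames without extension."""
    try:
        return sorted(
            os.path.splitext(name)[0]
            for name in os.listdir(DATA_FOLDER)
            if name.endswith(".parquet")
        )
    except FileNotFoundError:
        return []  # a missing data folder should not crash the app


def fetch_data(dataset_id, record_id):
    """Load one record as a time-indexed DataFrame."""
    path = os.path.join(DATA_FOLDER, f"{record_id}.parquet")
    if not os.path.exists(path):
        raise FileNotFoundError(f"No data file for record {record_id!r}")
    df = pd.read_parquet(path)
    # IS_DATE = True in this example: parse and index on the time column.
    df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
    return df.set_index("timestamp").sort_index()


def get_provider():
    """Expose the provider's entry points to dFL.

    The real contract likely requires more keys; see context.md.
    """
    return {
        "fetch_record_ids_for_dataset_id": fetch_record_ids_for_dataset_id,
        "fetch_data": fetch_data,
    }
```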
Review the generated files in your IDE like you would review a pull request:
- Check that paths and dataset IDs match what you wrote in `APP_Creation.md`.
- Skim for clear error handling (no crashes if files are missing, etc.).
If anything looks off, tell the LLM exactly what to fix (“use this absolute data path instead”, “treat time as numeric seconds”, etc.) and let it update the code.
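The test script can stay equally small. A hypothetical `test_weather_app_provider.py`, assuming it sits next to the provider module and mirrors the checks listed above (module, dataset, and key names are placeholders to adapt to your own app):

```python
# Hypothetical test script for the sketched weather provider.
from weather_app_data_provider import (
    fetch_data,
    fetch_record_ids_for_dataset_id,
    get_provider,
)

provider = get_provider()
for key in ("fetch_record_ids_for_dataset_id", "fetch_data"):
    assert key in provider, f"get_provider() is missing {key!r}"

record_ids = fetch_record_ids_for_dataset_id("weather")
assert record_ids, "expected at least one record ID"

df = fetch_data("weather", record_ids[0])
assert len(df) > 0, "expected non-empty data for the first record"

print(f"OK: {len(record_ids)} records; first record has {len(df)} rows")
```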
## Step 4 – Add optional features with small follow‑up prompts
Once the basic provider works, you can add optional features one at a time with short prompts. For each feature type:
- Custom normalizing – ask the LLM to:
    - Add a normalization function (e.g., robust scaling) with the correct function shape.
    - Register it in the provider’s `custom_normalizing_options` so it appears in the GUI’s Normalizing controls.
- Custom smoothing – ask the LLM to:
    - Add a smoothing function (e.g., simple moving average).
    - Register it in `custom_smoothing_options`.
- Custom graphers – ask the LLM to:
    - Add a Plotly‑based grapher function that respects the theme and uses valid colorscales.
    - Register it in `custom_grapher_dictionary` so it shows up as a graph type in dFL.
- Autolabelers – ask the LLM to:
    - Implement a function that generates label dictionaries (`T1`, `T2`, plus your labels).
    - Register it in `auto_label_function_dictionary` and update `all_labels`.
Use simple follow‑ups like:
```text
Now that the basic provider works, please:
- Add a robust scaling normalizer and wire it into the provider so it
  appears as a normalization option in the dFL GUI.
```
or:
```text
Now add a simple threshold‑based autolabeler for my main signal, and
register it so it shows up in the Autolabeler tab.
```
Always keep each request small and focused; it makes the results easier to test and review.
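To illustrate that last request, a threshold‑based autolabeler might look like the following sketch. It produces the `T1`/`T2` label dictionaries described above; the exact function signature and dictionary keys are assumptions to verify against `context.md`:

```python
import pandas as pd


def threshold_autolabeler(df: pd.DataFrame, column: str = "temperature",
                          threshold: float = 30.0):
    """Label every contiguous run where `column` exceeds `threshold`."""
    above = df[column] > threshold
    # run_id increments each time the series crosses the threshold
    run_id = (above != above.shift()).cumsum()
    labels = []
    for _, run in df[above].groupby(run_id[above]):
        labels.append({
            "T1": run.index[0],   # start of the run
            "T2": run.index[-1],  # end of the run
            "label": "above_threshold",
        })
    return labels


auto_label_function_dictionary = {"Threshold": threshold_autolabeler}
all_labels = ["above_threshold"]
```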
## Step 5 – Run the test script
When the provider and any optional features are in place:
- Open a terminal in your project environment.
- Run the generated test script, e.g., `python test_[APP_DIR_NAME]_provider.py`.
- Read the output:
    - If all checks pass, you’re ready to load the provider into dFL.
    - If something fails, copy the error message back into the LLM chat and say: “Here is the traceback from running `test_my_app_provider.py`. Please fix the provider so this test passes.”
Let the LLM update the code, then rerun the test until everything passes.
## Step 6 – Load the provider into dFL
Once the test script passes:
- Open dFL.
- Go to Settings → Data Source.
- Browse to and select your new provider file (e.g., `apps/[APP_DIR_NAME]/[APP_DIR_NAME]_data_provider.py`).
- Confirm the load.
You should now see:
- Your dataset’s records available in the control panel.
- Any custom normalizers and smoothers in the Graph Configuration controls.
- Any custom graphers in the graph type selector.
- Any autolabelers in the Autolabeler tab.
From here, you can continue to refine the provider by:
- Editing the code directly, or
- Going back to the LLM and asking for targeted improvements (“optimize performance”, “add another grapher”, “extend the label taxonomy”, etc.).
## Summary: The core interaction pattern
- Prepare: Fill in `APP_Creation.md` so the LLM has a complete picture of your dataset and requirements.
- Prompt: Paste those answers into an AI‑IDE chat, state your goal, and ask for a provider + test script using the dFL patterns.
- Iterate: Run the test script, fix issues with the LLM, and add optional features in small, focused steps.
- Integrate: Load the resulting provider into dFL and use your new app, normalizers, smoothers, graphers, and autolabelers inside the GUI.
Following these steps, you can reliably use LLMs and AI‑IDEs to build custom dFL apps without needing to write every line of Python by hand.