Basic Nodes Assignment

Objective

In this assignment, you will create a Node-RED workflow that makes use of the inject, debug, switch, change, template, and function nodes. The goal is to deepen your understanding of these nodes and their capabilities.

Task

Inject Node and Data Generation

  1. Add an inject node to your flow.

  2. Configure the inject node. Add a couple of new properties to the msg object:

    1. msg.timestamp and assign it to a Unix timestamp

    2. msg.name and assign it to your first name

  3. Configure the inject node to send the timestamp every 11 seconds
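With this configuration, every message leaving the inject node would carry roughly the properties below (the timestamp and name values are hypothetical placeholders):

```json
{
  "timestamp": 1700000000000,
  "name": "Alice"
}
```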

Switch Node

  1. Integrate a switch node after the inject node.

  2. Configure the Switch node to stop after the first match and route messages based on the following conditions:

    1. Check whether msg.timestamp is evenly divisible by 2 → route to output 1

    2. Otherwise → output 2

Hint: You can use a JSONata expression for the evaluation.
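As a sketch, assuming msg.timestamp holds a numeric Unix timestamp, the first rule could be expressed with a JSONata expression such as the one below (for example by evaluating the expression and routing on whether it is true, with an "otherwise" rule for output 2):

```jsonata
timestamp % 2 = 0
```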

Change Node

  1. Connect the outputs of the Switch node to Change nodes.

  2. In each Change node, set msg.parity to the string odd or even, depending on whether msg.timestamp is odd or even

Template Node

  1. Incorporate a Template node after each Change node.

  2. Configure the Template nodes to set the msg.payload to the following JSON format:

JSON
{
  "name": msg.name,
  "unixTimestamp": msg.timestamp,
  "parity": msg.parity
}

Please make sure that msg.timestamp is formatted as a number, not as a string wrapped in quotation marks.
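One way to sketch this in the Template node is a Mustache template with the output set to parsed JSON; the placeholder names below assume the msg properties set earlier in the flow, and note that {{timestamp}} is deliberately left unquoted so it stays numeric:

```mustache
{
  "name": "{{name}}",
  "unixTimestamp": {{timestamp}},
  "parity": "{{parity}}"
}
```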

Function Node

  1. Connect each Template node to a separate Function node.

  2. In the Function node:

    1. convert the Unix timestamp to UTC time

    2. using a template literal, assign the constant variable description the value This flow has been triggered in HH:MM:SS on DD.MM.YYYY by NAME

      1. replace the HH:MM:SS with the converted time

      2. replace DD.MM.YYYY with the converted date

      3. replace NAME with the value from msg.name

  3. At the end of the function node, set the status as Success.
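The conversion steps above can be sketched as a small helper function; this is one possible approach, not the only valid solution, and it assumes msg.timestamp is a Unix timestamp in milliseconds (as produced by the inject node):

```javascript
// Build the description string from a Unix timestamp (milliseconds) and a name.
function buildDescription(timestamp, name) {
    const date = new Date(timestamp);
    // Zero-pad a number to two digits for the HH:MM:SS and DD.MM parts.
    const pad = (n) => String(n).padStart(2, "0");
    const time = `${pad(date.getUTCHours())}:${pad(date.getUTCMinutes())}:${pad(date.getUTCSeconds())}`;
    const day = `${pad(date.getUTCDate())}.${pad(date.getUTCMonth() + 1)}.${date.getUTCFullYear()}`;
    // Template literal with the converted time, date, and name substituted in.
    return `This flow has been triggered in ${time} on ${day} by ${name}`;
}

// Inside the Function node you would then do something like:
// msg.payload.description = buildDescription(msg.timestamp, msg.name);
// node.status({ fill: "green", shape: "dot", text: "Success" });
// return msg;
```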

Testing and Debugging of The Flow

  1. Place a debug node after each function node

  2. Set the output to complete message

  3. Set a message counter on each debug node

  4. Monitor the debug sidebar and verify that everything is according to the requirements

The output of the flow should be a msg.payload of the following format:

JSON
{
  "name": msg.name,
  "unixTimestamp": msg.timestamp,
  "parity": msg.parity,
  "description": description
}

And your flow should look similar to the one below:

Basic Nodes Assignment Solution

Please note the format of the msg object in the debug tab from the screenshot above. Notice how the function node visually represents the Success status just below it, and how the debug nodes feature a counter, indicating the number of times they have been executed.

Optional Challenge (Extra Credit)

The following optional challenge involves calculating the number of tokens consumed to encode the description and adding it to the msg.payload.

Tokenization

In order for machines and NLP (Natural Language Processing) models, such as the LLMs (Large Language Models) BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), to comprehend human language, a crucial initial step is the conversion of written words into numerical representations, since computers operate on binary data.

This initial step, known as tokenization, forms the foundation of NLP endeavors. Tokenization entails the segmentation of a given text into discrete units referred to as tokens. These tokens can encompass both words and punctuation marks. Subsequently, these tokens are further transformed into numerical vectors, serving as mathematical representations of the words they represent.
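To make the idea concrete, here is a deliberately naive tokenizer that splits text into word and punctuation tokens; real model tokenizers (such as BPE or WordPiece) work on subword units, but the text → discrete tokens idea is the same:

```javascript
// Naive illustration of tokenization: split a sentence into word
// and punctuation tokens using a regular expression.
function naiveTokenize(text) {
    // \w+ matches runs of word characters, [^\w\s] matches single punctuation marks
    return text.match(/\w+|[^\w\s]/g) ?? [];
}

// naiveTokenize("Hello, world!") → ["Hello", ",", "world", "!"]
```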

To make sense of these numerical values, we use a special type of computer program called a deep learning model, often a transformer. This model is trained using the numerical vectors obtained through tokenization, enabling it to understand the complexities of word meanings and their contextual relationships.

The ultimate objective is to equip NLP models with the capability to comprehend the semantics and connotations of various words and their contextual placement within sentences or texts. This, in turn, enhances the NLP model's proficiency in understanding and processing human language.

That’s why many LLMs (Large Language Models), such as ChatGPT, enforce a strict limit on the number of tokens consumed by both the prompt and the model's output, and also charge a fixed price per token.

Encodings specify how text is converted into tokens. Different models use different encodings. tiktoken supports three encodings used by OpenAI models:

Encoding name       | OpenAI models
--------------------|-------------------------------------------------
cl100k_base         | gpt-4, gpt-3.5-turbo, text-embedding-ada-002
p50k_base           | Codex models, text-davinci-002, text-davinci-003
r50k_base (or gpt2) | GPT-3 models like davinci

  1. Import the @dqbd/tiktoken package to your function node

  2. Use the tiktoken package to calculate the number of tokens needed for this description using the cl100k_base encoder.

  3. Extend the msg.payload with a new property called tokens. Assign to it the calculated number of tokens necessary for encoding the description.
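A rough sketch of the token-counting step is shown below; it assumes the @dqbd/tiktoken package has been added as a module named tiktoken on the Function node's Setup tab, and the exact API surface may differ between package versions:

```javascript
// Hedged sketch: count the tokens needed to encode the description
// with the cl100k_base encoder from @dqbd/tiktoken.
const enc = tiktoken.get_encoding("cl100k_base");
const tokens = enc.encode(msg.payload.description).length;
enc.free(); // the underlying WASM encoder must be freed explicitly

msg.payload.tokens = tokens;
return msg;
```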

A simple web app implementation of tiktoken can be found here: https://tiktokenizer.vercel.app/

Tokens Decoding

Please note that you can use the TextDecoder() constructor to decode the generated tokens, but this is out of the scope of this exercise.

JSON
{
  "name": msg.name,
  "unixTimestamp": msg.timestamp,
  "parity": msg.parity,
  "description": description,
  "tokens": tokens
}

Tokens Calculation

Submission

  1. Export your Node-RED flow as a JSON file.

  2. Share the JSON file and description as your submission.

Best of luck!
