Basic Nodes Assignment
Introduction
In this assignment, you will create a Node-RED workflow that makes use of the inject, debug, switch, change, template, and function nodes. The goal is to deepen your understanding of these nodes and their capabilities.
Assignment
Inject Node and Data Generation
Add an inject node to your flow. Configure the inject node to add a couple of new properties to the msg object:

- msg.timestamp, assigned a Unix timestamp
- msg.name, assigned your first name
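For illustration, after the inject node fires, the message could look something like this (the values are examples; note that Node-RED's built-in "timestamp" option produces milliseconds since the epoch):

```json
{
    "timestamp": 1700000000000,
    "name": "Alice"
}
```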
Switch Node
Integrate a switch node after the inject node. Configure the switch node to stop after the first match and route messages based on the following conditions:

- If msg.timestamp is evenly divisible by 2 → route to output 1
- Otherwise → route to output 2

Hint: You can use a JSONata expression for the evaluation.
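As a sketch of one possible rule: add a "JSONata exp" rule to the switch node that tests the message, for example:

```
timestamp % 2 = 0
```

In a switch node JSONata expression, message properties are available directly, so timestamp here refers to msg.timestamp. Messages matching the expression go to output 1, and an "otherwise" rule routes the rest to output 2.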
Change Node
Connect the outputs of the switch node to change nodes. In the change nodes, set msg.parity to the string odd or even, depending on whether the number is odd or even.
Template Node
Incorporate a template node after each change node. Configure the template nodes to set msg.payload to the following JSON format:

{
    "name": msg.name,
    "unixTimestamp": msg.timestamp,
    "parity": msg.parity
}

Please make sure that msg.timestamp is formatted as a number, and not as a string with quotation marks around it.
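A minimal sketch of what the template node body could look like, using the node's default Mustache syntax (with the output property set to msg.payload and "Output as" set to parsed JSON; {{timestamp}} is deliberately written without surrounding quotes so it stays a number):

```json
{
    "name": "{{name}}",
    "unixTimestamp": {{timestamp}},
    "parity": "{{parity}}"
}
```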
Function Node
Connect each one of the template nodes to a separate function node. In the function node:

- Convert the Unix timestamp to UTC time.
- Using a string literal, assign to the description constant variable the value: This flow has been triggered in HH:MM:SS on DD.MM.YYYY by NAME
- Replace HH:MM:SS with the converted time, DD.MM.YYYY with the converted date, and NAME with the value from msg.name.
- At the end of the function node, set the status to Success.
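A hedged sketch of what the function node body could look like. It assumes msg.timestamp was a value in seconds (if you used the inject node's built-in "timestamp" option, which produces milliseconds, drop the * 1000). In Node-RED, msg and node are provided by the runtime; they are stubbed here so the snippet runs standalone:

```javascript
// Stubs for the Node-RED runtime (provided automatically in a real function node).
const node = { status: (s) => console.log("status:", s.text) };
const msg = { payload: { name: "Alice", unixTimestamp: 1700000000, parity: "even" } };

// Convert the Unix timestamp (assumed seconds) to a UTC Date.
const d = new Date(msg.payload.unixTimestamp * 1000);
const pad = (n) => String(n).padStart(2, "0");

const time = `${pad(d.getUTCHours())}:${pad(d.getUTCMinutes())}:${pad(d.getUTCSeconds())}`;
const date = `${pad(d.getUTCDate())}.${pad(d.getUTCMonth() + 1)}.${d.getUTCFullYear()}`;

// Build the description with a template literal and add it to the payload.
const description = `This flow has been triggered in ${time} on ${date} by ${msg.payload.name}`;
msg.payload.description = description;

// Show a Success status under the node.
node.status({ fill: "green", shape: "dot", text: "Success" });

// return msg;  // the real function node ends by returning the message
```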
Testing and Debugging of The Flow
Place debug nodes after each function node. Set the output to complete message, and set a message counter on each debug node. Monitor the debug sidebar and verify that everything is according to the requirements.
The output of the flow should be a msg.payload of the following format:
{
"name": msg.name,
"unixTimestamp": msg.timestamp,
"parity": msg.parity,
"description": description
}
And your flow should look similar to the one below:

[Screenshot: the complete flow, with the debug sidebar open]

Please note the format of the msg object in the debug tab from the screenshot above. Notice how the function node visually represents the Success status just below it, and how the debug nodes feature a counter indicating the number of times they have been executed.
Optional Challenge (Extra Credit)
The following optional challenge involves calculating the number of tokens consumed to encode the description and adding it to the msg.payload.
Tokenization
For machines and NLP (Natural Language Processing) models, such as the LLMs (Large Language Models) BERT (Bidirectional Encoder Representations from Transformers) or GPT (Generative Pre-trained Transformer), to comprehend human language, a crucial initial step is the conversion of written words into numerical representations, since computers operate on binary data.
This initial step, known as tokenization, forms the foundation of NLP endeavors. Tokenization entails the segmentation of a given text into discrete units referred to as tokens. These tokens can encompass both words and punctuation marks. Subsequently, these tokens are transformed into numerical vectors, serving as mathematical representations of the words.
To make sense of these numerical values, we use a special type of computer program called a deep learning model, often a transformer. This model is trained using the numerical vectors obtained through tokenization, enabling it to understand the complexities of word meanings and their contextual relationships.
The ultimate objective is to equip NLP models with the capability to comprehend the semantics and connotations of various words and their contextual placement within sentences or texts. This, in turn, enhances the NLP model's proficiency in understanding and processing human language.
That's why many LLMs (Large Language Models), such as ChatGPT, enforce a strict limit on the number of tokens consumed by both the prompt and the model's output, and also charge a fixed price per token.
Encodings specify how text is converted into tokens. Different models use different encodings. tiktoken supports three encodings used by OpenAI models:

Encoding name | OpenAI models |
---|---|
cl100k_base | gpt-4, gpt-3.5-turbo, text-embedding-ada-002 |
p50k_base | Codex models, text-davinci-002, text-davinci-003 |
r50k_base (or gpt2) | GPT-3 models like davinci |
- Import the tiktoken npm package into your function node.
- Use the tiktoken package to calculate the number of tokens needed for this description using the cl100k_base encoder.
- Extend msg.payload with a new property called tokens. Assign to it the calculated number of tokens necessary for encoding the description.
- After completing the encoding process, do not forget to free the tiktoken encoder.
A simple web app implementation of tiktoken can be found here: https://tiktokenizer.vercel.app/
Tokens Decoding
Please note that you can use the TextDecoder() constructor to decode the generated tokens, but this is out of the scope of this exercise.
The final msg.payload should then have the following format:

{
"name": msg.name,
"unixTimestamp": msg.timestamp,
"parity": msg.parity,
"description": description,
"tokens": tokens
}
Submission
Export your Node-RED flow as a JSON file.
Share the JSON file and description as your submission.
Best of luck!