/r/node

262,549 Subscribers

1

Seeking Recommendations: Free Log Collection & Analysis Tools for Node.js

Hey, r/Node community! I'm on the lookout for tools to help with log collection and analysis for Node.js apps. Ideally, I'm interested in free options that also offer a UI for easier log management. How do you handle your log data? Are there specific tools or setups you'd recommend for effectively monitoring and analyzing application logs? I'd appreciate any insights or experiences you can share, especially if they help streamline the process!

0 Comments
2024/04/16
09:23 UTC

4

Where can I find more resources on mid- to advanced-level Node?

Hey mates,

I'm looking for resources where I can learn more about clean architecture, dependency injection, and entity separation.

I want the logic in my project to be separated: separate User entities, separate interfaces for database calls, controllers, etc. Unfortunately, most of the things I can find are beginner-level.

Is there a good book, a YouTube channel, or a series of articles describing proper code structure and backend architecture?
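To make the kind of separation I mean concrete, here's a minimal sketch (all names hypothetical) of constructor injection keeping a controller independent of the database layer:

// user_repository.js - the only file that knows about the database.
// `db` is a hypothetical connected MongoDB database handle.
class MongoUserRepository {
    constructor(db) {
        this.collection = db.collection("users");
    }
    async findById(id) {
        return await this.collection.findOne({ _id: id });
    }
}

// user_controller.js - depends only on the repository's interface
// (anything with findById), not on MongoDB itself.
class UserController {
    constructor(userRepository) {
        this.userRepository = userRepository;
    }
    async getUser(req, res) {
        const user = await this.userRepository.findById(req.params.id);
        res.json(user);
    }
}

// Composition root: the concrete implementation is wired in one place,
// so tests can inject an in-memory repository instead.
const controller = new UserController(new MongoUserRepository(db));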

2 Comments
2024/04/16
08:21 UTC

1

How to send PATCH requests in HTML?

I'm learning backend development with Express.js right now. I've made a few routes and a few POST and GET requests using the server. I was using forms to make POST and GET requests, but later found that forms can only make those two kinds of requests. So how do I make other requests now? I've also heard something about using fetch for making requests. What is the "correct" way to do it?
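For reference, since HTML forms only submit GET and POST, other verbs are usually sent from client-side JavaScript with fetch; a minimal sketch (the /users/1 route is hypothetical):

// Hypothetical Express route; adjust the URL and body to your app.
const res = await fetch("/users/1", {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "new name" }),
});
console.log(res.status, await res.json());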

2 Comments
2024/04/16
06:17 UTC

0

Can you please help by filling out my survey for research?

Hi everyone!
I created a survey for my thesis about security issues in frontend development, especially with npm packages, and about how the open source community influences developers. 1-2 sentences per answer is more than enough; I'm mainly looking for new perspectives. This is the link for the survey: https://www.surveymonkey.com/r/CYFTRLP
Thank you for your time and patience! :)

0 Comments
2024/04/16
00:59 UTC

0

Creating an OpenAI RAG chatbot for a high-volume server.

My goal is to create a RAG chatbot using OpenAI. However, I need a chatbot per user, so my problem is memory management and performance. Every user has their own corpus of texts and corpus embeddings, which are used to build the context for questions. My problem is in caching/loading the corpus and embeddings; I am not sure if this is the correct way of implementing something like this.

The cache system is vlib.Cache, which has ttl (time to live) and limit parameters; the limit parameter causes the removal of the oldest item when the limit has been reached.

I am aware of some undefined variables and libraries, but this is more of an architecture question than a question of whether the code works.

The code is a bit large but any help would be very much appreciated!

Does this look like something that would be applicable in a high-volume web server? Or what could be better?

// ---------------------------------------------------------
// Chat class.
// Used to create an ongoing chat inside the assistant.

class Chat {
    constructor(assistant) {

        // Attributes.
        // Generate a unique chat id (String.random is a vlib extension).
        this.id = String.random(32);
        while (assistant.chats.has(this.id)) {
            this.id = String.random(32);
        }
        this.messages = [];

        // Register the chat in the assistant's cache so that later
        // lookups by chat id succeed.
        assistant.chats.set(this.id, this);
    }
}

// ---------------------------------------------------------
// Assistant class.

class Assistant {
    constructor({
        type = "assistant", // the assistant type for the database.
        embedding_model = "text-embedding-3-small", //"text-embedding-ada-002",
        gpt_model = "gpt-3.5-turbo-0125", 
        instruction = "You are a great assistant.",
        context_limit = 10,
        max_tokens = null, 
        api_key = process.env.OPENAI_API_KEY,
    } = {}) {

        // Arguments.
        this.type = type;
        this.embedding_model = embedding_model;
        this.gpt_model = gpt_model;
        this.max_tokens = max_tokens;
        this.instruction = instruction;
        this.context_limit = context_limit;

        // Attributes.
        this.openai = new OpenAI({ apiKey: api_key });
        this.token_usage = [];
        this.chats = new vlib.Cache({ttl: 36000 * 1000});
        this.faiss_cache = new vlib.Cache({limit: 100, ttl: 60 * 15 * 1000});
        this.corpus_cache = new vlib.Cache({limit: 100, ttl: 60 * 15 * 1000});
        this.file_storage = server.source.join(`ai/assistants/${type}/`)
        if (!this.file_storage.exists()) {
            this.file_storage.mkdir_sync()
        }
    }

    /*
    // Initialize corpus.
    async initialize_corpus() {

        // Initialize faiss index.
        const initialize_faiss_index = (embeddings) => {
            const dimension = embeddings[0].length;
            this.index = new IndexFlatL2(dimension);
            embeddings.iterate(i => {
                this.index.add(i)
            })
        }

        // Hash.
        const corpus_hash = crypto.createHash('sha256').update(JSON.stringify(this.corpus)).digest('hex');

        // Load embeddings.
        this.stored_embeddings = {};
        try {
            this.stored_embeddings = JSON.parse(await fs.readFile(this.embeddings_file, 'utf8'));
        } catch {
            this.stored_embeddings = {};
        }

        // Hash has changed.
        if (this.stored_embeddings.hash !== corpus_hash) {
            console.log('Generating new embeddings for the corpus');
            const embeddings = await this.generate_embeddings(this.corpus);
            this.stored_embeddings = { hash: corpus_hash, embeddings };
            await fs.writeFile(this.embeddings_file, JSON.stringify(this.stored_embeddings, null, 2));
            initialize_faiss_index(embeddings);
        }

        // Already embedded.
        else {
            console.log('Using cached embeddings');
            initialize_faiss_index(this.stored_embeddings.embeddings);
        }
    }
    */

    // Start the assistant.
    // - Periodically upload the token usage bucket.
    start() {

        // Submit tokens.
        this.token_usage_interval = setInterval(async () => {
            if (this.token_usage.length > 0) {

                // Atomically replace `this.token_usage` with a new array and store the old data for processing.
                // This will not cause any lost tokens when a process appends during the splicing.
                const token_usage = this.token_usage.splice(0, this.token_usage.length);

                // Map per week start.
                const date = new vlib.Date().week_start().sec();

                // Create bulk operations.
                const bulk_operations = [];
                for (let i = 0; i < token_usage.length; i++) {

                    const usage = token_usage[i];
                    let upload = {}
                    upload[`${date}.${usage.model}.prompt`] = usage.prompt;
                    upload[`${date}.${usage.model}.completion`] = usage.completion;
                    bulk_operations.append(
                        await server.db.openai_token_usage.save(
                            usage.uid,
                            {project: project.name, type: "document", name: file.name},
                            {$inc: upload},
                            {bulk: true, set: false},
                        )
                    )
                }

                // Submit to the database.
                await server.db.openai_token_usage.bulk_operations(bulk_operations);
            }

        }, 60 * 15 * 1000);
    }

    // Create the assistant.
    // This should only be called once.
    async create({uid, project}) {

        // Create the corpus.
        const project_obj = await Project.load({uid: uid, name: project});
        // const corpus_mem = process.memoryUsage();
        const documents = await project_obj.load_documents();
        const corpus = [];
        Object.values(documents).iterate(doc => {
            if (doc.data) {
                corpus.append(doc.data);
            }
        })
        // console.log(`Corpus memory: ${(process.memoryUsage().heapUsed - corpus_mem.heapUsed) / 1024 / 1024} MB`);

        // Generate the embeddings.
        const embeddings = await this.generate_embeddings(uid, corpus);

        // const faiss_mem = process.memoryUsage();

        // Initialize the faiss index.
        const dimension = embeddings[0].length;
        const faiss_index = new IndexFlatL2(dimension);
        embeddings.iterate(i => {
            faiss_index.add(i)
        })

        // console.log(`Faiss memory: ${(process.memoryUsage().heapUsed - faiss_mem.heapUsed) / 1024 / 1024} MB`);

        // Save the embeddings.
        let path = this.file_storage.join(`${uid}.${project}.faiss`).str();
        faiss_index.write(path);

        // Save the corpus.
        await this.file_storage.join(`${uid}.${project}.corpus`).save(JSON.stringify(corpus));
    }

    // Load embeddings.
    async load_embeddings(uid, project) {
        const id = `${uid}.${project}`;
        if (this.faiss_cache.has(id)) {
            return this.faiss_cache.get(id);
        }
        const faiss_index = IndexFlatL2.read(this.file_storage.join(`${uid}.${project}.faiss`).str())
        this.faiss_cache.set(id, faiss_index);
        return faiss_index;
    }

    // Load corpus.
    async load_corpus(uid, project) {
        const id = `${uid}.${project}`;
        if (this.corpus_cache.has(id)) {
            return this.corpus_cache.get(id);
        }
        const corpus = JSON.parse(await this.file_storage.join(`${uid}.${project}.corpus`).load());
        this.corpus_cache.set(id, corpus);
        return corpus;
    }

    // Generate embeddings.
    async generate_embeddings(uid, texts) {
        // This could be enhanced by caching individual text embeddings if required.
        const response = await this.openai.embeddings.create({
            model: this.embedding_model,
            input: texts,
            encoding_format: "float",
        });
        this.token_usage.append({
            uid,
            model: this.embedding_model,
            prompt: response.usage.prompt_tokens,
            completion: 0,
        });
        return response.data.map(item => item.embedding);
    }

    // Extract embeddings.
    find_most_similar_embeddings(faiss_index, query_embedding) {
        const { labels } = faiss_index.search(query_embedding, this.context_limit);
        return labels.flat();
    }

    // Message.
    async message({uid, project, chat_id, query}) {
        try {

            // Create assistant.
            if (!this.file_storage.join(`${uid}.${project}.faiss`).exists()) {
                throw new Error("The defined assistant does not exist yet.");
                // await this.create();
            }

            // Load embeddings.
            const faiss_index = await this.load_embeddings(uid, project);

            // Load corpus.
            const corpus = await this.load_corpus(uid, project);

            // Retrieve or create chat.
            let chat;
            if (chat_id != null && this.chats.has(chat_id)) {
                chat = this.chats.get(chat_id);
            } else {
                // Either no chat id was given, or the chat expired from
                // the cache because it was unused; create a fresh one.
                chat = new Chat(this);
                chat_id = chat.id;
            }

            // Generate embeddings for the query.
            const [query_embedding] = await this.generate_embeddings(uid, [query]);
            const similar_indices = this.find_most_similar_embeddings(faiss_index, query_embedding);

            // Use the corpus text corresponding to similar indices to form a context for the query.
            const context = similar_indices.map(index => corpus[index]).join("\n");
            const prompt = `Context:\n${context}\n\nQuestion:\n${query}`;

            // Add to messages.
            if (chat.messages.length === 0) {
                chat.messages.append({role: "system", content: this.instruction});
            }
            chat.messages.append({role: "user", content: prompt});

            // Fetch the completion using the GPT model
            const response = await this.openai.chat.completions.create({
                model: this.gpt_model,
                messages: chat.messages,
                max_tokens: this.max_tokens,
            });

            // Track tokens.
            this.token_usage.append({
                uid,
                model: this.gpt_model,
                prompt: response.usage.prompt_tokens,
                completion: response.usage.completion_tokens,
            });

            // Retrieve answer.
            const answer = response.choices[0].message.content.trim()

            // Add an answer to the chat messages.
            chat.messages.append({role: "assistant", content: answer});

            // Response.
            return {
                error: null,
                answer,
                chat_id,
            };

        }

        // Handle error.
        catch (error) {
            console.error(error);
            return {
                error: "An error occurred while generating the response.",
                answer: null,
                chat_id,
            };
        }
    }
}
2 Comments
2024/04/15
20:52 UTC

2

Serverless manipulation

Hey,

I have a client with a barbershop business; I made him a web app with a scheduled-appointment mechanism. It works well so far. Recently he asked me to add an SMS reminder 3 hours before the customer's appointment. I'm using Express.js, deployed on Vercel, which is a serverless platform.

How can/should I implement it?

Is there a way to schedule something like this on a serverless platform?

Should I use the front end for this case?
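One pattern that might fit (a sketch, not a tested answer): Vercel supports cron jobs declared in vercel.json that hit an API route on a schedule, and that route can look up appointments starting in the next window and send the SMS through a provider; findAppointmentsWithin, sendSms, and markReminderSent below are hypothetical helpers:

// vercel.json (sketch): invoke the reminder route every 15 minutes.
// { "crons": [{ "path": "/api/send-reminders", "schedule": "*/15 * * * *" }] }

// api/send-reminders.js - hypothetical handler; the helpers stand in for
// your database query and SMS provider client (e.g. Twilio).
export default async function handler(req, res) {
    const windowMs = 3 * 60 * 60 * 1000; // appointments within the next 3 hours
    const upcoming = await findAppointmentsWithin(windowMs);
    for (const appt of upcoming) {
        if (!appt.reminderSent) {
            await sendSms(appt.phone, `Reminder: your appointment is at ${appt.time}`);
            await markReminderSent(appt.id);
        }
    }
    res.status(200).json({ sent: upcoming.length });
}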

3 Comments
2024/04/15
17:48 UTC

46

Is Fastify a valid choice in 2024?

We are considering our backend options; do you think Fastify is a good or bad choice?

//edit thanks for replies! will choose Fastify then :D

42 Comments
2024/04/15
16:49 UTC

0

Why won't `pingInterval: 1000` replace setInterval() in socket.io?

I'm following these docs: https://socket.io/docs/v4/connection-state-recovery#usage.

And the example given: https://codesandbox.io/p/sandbox/github/socketio/socket.io/tree/main/examples/connection-state-recovery-example/esm?file=index.js

To test out the connection-state recovery feature in socket.io. But I'm confused: I thought socket.io handled ping/pong emits automatically, so I figured setting pingInterval: 1000 would replace this line used in the example:

setInterval(() => {
  io.emit("ping", new Date().toISOString());
}, 1000);

But the recovery isn't working when using:

const io = new Server(server, {
  // for work with socket.io
  cors: corsOptions,
  pingInterval: 1000,
  connectionStateRecovery: {
    maxDisconnectionDuration: 2 * 60 * 1000, // 2 mins backup
    skipMiddlewares: true,
  },
});

without the setInterval...

0 Comments
2024/04/15
16:13 UTC

11

How to capture an exception in an unknown part of the code?

Hi! I'm working on a fairly complex Express backend. Oftentimes it can throw an error unexpectedly for any number of reasons. I usually try to code defensively and safeguard any 'dangerous' paths, but I can't be writing try-catch blocks every other line. Besides, I don't have control over the code of the numerous libraries the system uses.

Can you think of a way of finding out the exact path the program took to arrive at that particular exception? Especially if I can link it to the request that triggered the code execution. I'm thinking stack traces but they're noisy. I would rather have a more direct way of tracking them, even if it starts with a stack trace.

I have already written a middleware that logs every request and response in detail, but I'm still missing the errors that prevent the middleware from running, i.e., the ones that crash the app. Also, even if an error is not fatal, I don't know which request generated it unless I manually track it every time.

Thanks in advance for any input. I don't need a full implementation, just nudge me in the right direction if you've done this before
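One direction worth naming (a sketch, assuming an Express app): Node's built-in AsyncLocalStorage from node:async_hooks can carry a request ID across async boundaries, so error handlers can recover which request was in flight without threading it manually:

const { AsyncLocalStorage } = require("node:async_hooks");
const crypto = require("node:crypto");
const express = require("express");

const requestContext = new AsyncLocalStorage();
const app = express();

// Run every request inside a store that holds its ID and URL.
app.use((req, res, next) => {
    requestContext.run({ id: crypto.randomUUID(), url: req.url }, next);
});

// Anywhere down the async call chain, the triggering request is recoverable.
process.on("uncaughtException", (err) => {
    const ctx = requestContext.getStore(); // may be undefined outside a request
    console.error("Crash during request:", ctx, err.stack);
    process.exit(1);
});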

17 Comments
2024/04/15
15:45 UTC

1

What is the point of "./package.json": "./package.json" in package.json exports?

Edit - I think this only relates to subpaths.

-----------------------------------------------

I don't know the purpose of the following syntax in package.json "exports":

"./package.json": "./package.json"

I noticed an ESLint error when it is not present:

Package subpath './lint/package.json' is not defined by "exports"

I can't find anything to explain the purpose of this. Can someone explain it to me? Thanks.
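For context: once a package defines "exports", every subpath not listed there becomes unreachable from outside the package, and that includes package.json itself, which tools like ESLint resolve directly; the self-reference appears to simply re-expose it. A typical map looks something like this (the ./lint entry is hypothetical, mirroring the error above):

{
  "exports": {
    ".": "./index.js",
    "./lint": "./lint/index.js",
    "./package.json": "./package.json"
  }
}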

1 Comment
2024/04/15
14:45 UTC

0

Render a 3d scene using Node.js

Hey, I was wondering if there is a good lib I can use to render an image of, say, a 3D cube with a texture on it. What I want is to pass in a texture (in this case a texture of a cape) and get an image like the one I added as the output.

https://preview.redd.it/jk3vgt4d8nuc1.png?width=420&format=png&auto=webp&s=7a13864e251e6d1cd2ff8eea37516da622cb312c

1 Comment
2024/04/15
13:13 UTC

5

axios responseType stream vs node-fetch

Hi fellow programmers. I have this exact code in my repo to get an image from a URL:

const documentStream = await axios.get<Readable>("/getAwsImg/", {
    responseType: "stream",
});
return documentStream.data;

Now, I created an exact replica for this in node-fetch

const documentStream = await fetch("/getAwsImg/");
return documentStream.body;

Now, after this, I'm sending it to my third-party server:

const formData = new FormData();
formData.append("imageDoc", imgDocumentStream);
const headers = formData.getHeaders();
const upload = await fetch(UPLOAD_URL, {
   body: formData,
   method: "POST",
   headers: { "content-type": headers["content-type"]}
});
return upload;

So, when I use axios, I get all the metadata like file type, MIME type, content-type, etc., but when using node-fetch, all I get is content-type: application/octet-stream (which is wrong). I want to remove the axios dependency, but this doesn't seem to work. Any help or suggestions?
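For anyone comparing: the form-data package can read a filename and MIME type off an http.IncomingMessage like the one axios returns, but a bare node-fetch body stream carries no such metadata, so it falls back to application/octet-stream. A possible workaround (a sketch with hypothetical values) is to pass them explicitly:

// Supply the metadata form-data cannot infer from a plain stream.
const response = await fetch("/getAwsImg/");
const formData = new FormData();
formData.append("imageDoc", response.body, {
    filename: "image.png", // hypothetical; use the real file name
    contentType: response.headers.get("content-type"),
});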

15 Comments
2024/04/15
11:26 UTC

10

Does anyone know why the Chrome console allows accessing class private fields (with the "#" prefix) outside the class, on its instance object?

The code below actually works in the Chrome console and does not throw an exception (but it does throw on the Firefox and Safari consoles).

"use strict"
class ClassWithPrivateField {
  #privateField = 42;
}

const instance = new ClassWithPrivateField();

// below line works ONLY in chrome but should throw an exception
console.log(instance.#privateField, "--can access private field outside class");

// outputs -> 42 --can access private field outside class

Does anyone know why? Is this a bug in Chrome? The Chrome version I have is 123.0.6312.123 (official build).

17 Comments
2024/04/15
10:34 UTC

0

Scrape Amazon reviews

I stumbled upon an excellent blog detailing how to scrape Amazon reviews using Node.js and JavaScript. It offers a step-by-step approach for extracting valuable data, ensuring a smooth process and providing crucial insights for market research and product development.

1 Comment
2024/04/15
10:03 UTC

18

Are there free or cheap options to host a Node.js application?

26 Comments
2024/04/15
06:35 UTC

1

How to send user's private messages in socket.io?

I'm making a simple React + Node web game with no database; Player objects last only for the duration of their session. When a player disconnects, the socket and player SHOULD get removed at the same time.

So to send a user a separate message, I was planning on just setting the player ID to the socket.id upon connection and doing something like io.to(player.getID()).emit(...), where getID() just returns the socket.id.

But I read online that socket.id is reset every time a connection or reconnection occurs. So if the user disconnects from wifi for a minute and reconnects (they should stay in the game), their socket will reset and generate a new socket ID? That means io.to(player.getID()).emit(...) wouldn't emit to the right socket, correct?

If that's the case, what's the best practice to send users private messages in socket.io in my scenario?
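One approach I've seen suggested (a sketch, assuming you assign player IDs yourself): key a room on a stable ID instead of socket.id; socket.join() accepts any room name and io.to(room) reaches every socket in it, so a reconnecting socket just rejoins:

io.on("connection", (socket) => {
    // Stable ID supplied by the game, not by socket.io; the auth field
    // name here is hypothetical.
    const playerId = socket.handshake.auth.playerId;
    socket.join(playerId); // room survives reconnects: new sockets rejoin it

    // Later, from anywhere on the server:
    // io.to(playerId).emit("private", "only this player sees this");
});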

16 Comments
2024/04/14
22:09 UTC

0

Install Node 20 on Win7?

I was using Node on Windows 7, where the latest version you can install is Node 13; Node is now up to version 20, so you miss out on a lot of developments. If you run Win7, is there any way to run the Node installer so it will install anyway?

12 Comments
2024/04/14
20:52 UTC

3

NodeJS NestJS/Express - Creating Custom Exception Libraries for SDK integration?

Hey all,

Does anyone have any good examples or recommended best practices for extending SDKs in an express/nestjs environment? I come from primarily C#/netcore where creating extended error handling libraries for 3rd party SDKs is pretty normal.

What are the recommended best practices for this in an Express (NestJS) setting? Should I just handle the response codes within the service classes and translate them to the native NestJS library? Create an extended class for the specific SDK codes?

Any examples of projects I could leverage?

0 Comments
2024/04/14
20:30 UTC

2

An API Gateway for frontend applications

Hey everyone! I've recently been working on an API gateway to help frontend applications access third-party APIs that require secret API keys without creating their own backend.

I would love to know what the community thinks and I’m open to any feedback!

For some further insight: it allows you to configure an Express server using a YAML file where you define endpoints that proxy requests to the third-party API. You can modify the requests to add secrets that should not be exposed on the frontend. You're also able to define policies such as CORS, JWT and API-key authentication, as well as rate limiting.

Check it out here: https://github.com/gateweaver/gateweaver

0 Comments
2024/04/14
20:13 UTC

0

Otto is an autonomous software engineer that runs in the browser

Otto Engineer is an #autonomous #ai #coding #agent that runs in the browser and requires zero setup. It's free for anyone to try 🤗 In the attached demo, you can watch Otto create and run a dev server and send HTTP requests to it in order to test its work! 🤩 I ask Otto to create the BE for a todo list app in the form of an express server that supports CRUD operations and uses an in-memory DB via sqlite3. Otto sets up the libraries I want to use and the types for them, iterates through some TypeScript issues, implements the logic, and finds a way to test it. Then I download the fully working project in one click 😎

The goals for Otto are to 1) never get stuck - it's a ChatGPT that can fix its own errors and iterate on code, perfect for debugging through an issue autonomously 2) have a way to quickly scaffold out starter code and prototypes. 3) Automatically implement utilities and mini libraries from scratch. Its capabilities on all of these fronts have continued to expand and evolve rapidly over time!

Check out the interactive demo this video was created from here: https://app.arcade.software/share/Pjze2YOh6EQHqW0mZ4km

Try Otto here: https://otto.engineer/

If you're an engineer, I'd love to hear your thoughts on what you'd like AI to do for you! I'm having a lot of fun exploring what's possible here, and I'm excited and optimistic about the future of AI-driven software development

3 Comments
2024/04/14
19:32 UTC

2

Import works with node but not ts-node

I am working on setting up a custom migration workflow using the umzug npm package.

We have an internal config package that loads configuration from a .yaml file.

This is the script I have copied over.

The code is almost the same, except for the getRawSqlClient function where I call the method to create a connection to a Postgresql database and return an interface to run queries.

I have copied the script in a .ts file and run it using ts-node.

It seems the method to create a database connection cannot load the config.

However, when I change the file from a .ts to a .js, everything seems to work as expected.

I have a feeling this has to do with ts-node compiler options but I am not sure.

I would have loved to copy/paste the code but cannot due to employer NDA policy.

Can someone give me a few pointers?

1 Comment
2024/04/14
18:56 UTC

11

CLI tool for creating a backend with Node.js

Hello everyone! Not long ago, I started creating an NPM package for generating server-side Node.js code. I would appreciate it if you could evaluate the result and possibly suggest some ideas for the package's development. Essentially, this package is a CLI utility: you answer a couple of questions, and it generates fully functional code. Welcome aboard! GIT.

https://www.npmjs.com/package/awesome-backend

6 Comments
2024/04/14
17:04 UTC

0

C# vs NodeJs.

What to learn for backend development in 2024?

37 Comments
2024/04/14
14:27 UTC

1

Problem with nodemailer library

The nodemailer library works without any problems on localhost, but it does not work on cPanel hosting. Why?

5 Comments
2024/04/14
13:33 UTC

3

Writing Node.js linked module using napi-rs

Recently, I spent some time integrating napi-rs into the JSAR runtime project, and I wanted to share my experience with readers as a reference.

Core concepts

First, the article will mention 3 key concepts:

  1. napi-rs
  2. Node.js Embedder API
  3. Node.js linked module

First, with napi-rs, developers can write Node.js N-API modules in Rust with concise and clear syntax, making the development process more straightforward.

/// import the preludes
use napi::bindgen_prelude::*;
use napi_derive::napi;

/// module registration is done by the runtime, no need to explicitly do it now.
#[napi]
fn fibonacci(n: u32) -> u32 {
  match n {
    1 | 2 => 1,
    _ => fibonacci(n - 1) + fibonacci(n - 2),
  }
}

Then there is the Node.js Embedder API. Developers typically use the official distribution package directly, installing it and then using the node command. For example:

node script.mjs

Beyond that, Node.js also provides a way to embed Node.js scripts into other processes; you can find more details in the documentation under “C++ embedder API”. Developers can achieve embedded script execution using the C++ API provided by Node.js. The JSAR runtime operates on XR devices as a Web-like runtime environment built on the Node.js Embedder API; it implements the WebXR and WebGL APIs on top of Node.js.

Even for developers accustomed to writing N-API modules, linked modules may seem unfamiliar. Typically, developers package N-API modules into .node files using GYP or other packaging tools. Linked modules, on the other hand, are used in conjunction with the Node.js Embedder API.

A linked module is like any other built-in module in Node.js, directly integrated into libnode.so. As the name suggests, it provides native capabilities that can be utilized by the Node.js embedder. JSAR, as a Node.js embedder runtime, uses linked modules to interface with WebGL and WebXR.

After understanding these three terms, readers should have a clear picture of what I aim to achieve: enabling the use of napi-rs within the Node.js Embedder.

The approach

Defining a linked module takes two steps: first, define the corresponding napi_module structure; second, call the node::AddLinkedBinding function to add the module to Node.js.

Let’s take a look at the properties of the napi_module structure:

typedef struct napi_module {
  int nm_version;
  unsigned int nm_flags;
  const char* nm_filename;
  napi_addon_register_func nm_register_func;
  const char* nm_modname;
  void* nm_priv;
  void* reserved[4];
} napi_module;

The main focus is on the following three fields:

  • nm_flags: For linked modules, set to node::ModuleFlags::kLinked so that Node.js knows this is an N-API module.
  • nm_register_func: The initialization entry function of the N-API module, which is called when the module is imported.
  • nm_modname: Module name, used to find the corresponding module when imported in JavaScript by name.

Next is adding the module at runtime, which is relatively simple:

AddLinkedBinding(env, jsar_webgl_napi_mod);
AddLinkedBinding(env, jsar_webxr_napi_mod);

Therefore, to create linked modules using napi-rs, the key is how to obtain nm_register_func in the C/C++ environment.

Implementing this project may be a bit tricky, but the principle is not complicated:

  1. Enable the noop feature of napi-rs so that the generated napi_register_module_v1 function is removed.
  2. Define nm_register_func in Rust and declare pub extern "C" fn foo_module(env: napi_env, exports: napi_value) -> napi_value.
  3. By default, napi-rs compiles into a dynamic library. However, since linked modules need to be linked with the libnode library, you need to modify the build options in Cargo.toml to create a static library and link napi-rs's static library files in the final project.
  4. Add the corresponding header file, then define napi_module and call node::AddLinkedBinding() to complete the process.

Implementation for engineering

Now that we understand the principles, let’s move on to the implementation for engineering.

The following implementation is not complete, as some non-critical parts have been omitted. 

First, let’s see how a linked module is ultimately defined in a napi-rs module:

use jsar_jsbinding_macro::linked_module;
use napi::{CallContext, Env, JsNumber, JsObject, NapiValue, Result};

#[linked_module(canvas)]
fn init_canvas_module(env: Env, mut exports: JsObject) -> Result<()> {
  // ...module body omitted...
  Ok(())
}

It can be observed that I created a new package called jsar_jsbinding_macro and provided a macro #[linked_module(name)], which is similar to the built-in #[module_exports] functionality in napi-rs.

jsar_jsbinding_macro

The code:

#[macro_use]
extern crate syn;
#[macro_use]
extern crate quote;

use proc_macro::TokenStream;
use syn::{parse_macro_input, ItemFn};

#[proc_macro_attribute]
pub fn linked_module(attr: TokenStream, input: TokenStream) -> TokenStream {
  let input = parse_macro_input!(input as ItemFn);
  let fn_impl = quote! { #input };
  let fn_name = input.sig.ident;

  // get the first attr as the entry name
  let attr = parse_macro_input!(attr as syn::Ident);
  let entry_name = format_ident!("jsbinding_{}", attr);

  (quote! {
    #[inline]
    #fn_impl

    #[no_mangle]
    pub extern "C" fn #entry_name(
      raw_env: napi::sys::napi_env,
      raw_exports: napi::sys::napi_value,
    ) -> napi::sys::napi_value {
      unsafe {
        let env = napi::Env::from_raw(raw_env);
        let exports = napi::JsObject::from_raw_unchecked(raw_env, raw_exports);
        if let Err(e) = #fn_name(env, exports) {
          napi::JsError::from(e).throw_into(raw_env);
        }
      }
      raw_exports
    }
  })
  .into()
}

It’s evident that the purpose of this macro is to add a corresponding C export function for each function defined with #[linked_module(name)]. I've prefixed the function names with "jsbinding_"; for example, linked_module(canvas) will generate a jsbinding_canvas() function. Inside this function, the env and exports pointers passed in from C/C++ are converted into napi's Env and JsObject objects, and then the function defined below the macro is called.

N-API Module’s Cargo.toml

In this context, the Cargo.toml refers to the package where the specific module logic is implemented using napi-rs.

[package]
name = "your_package"
version = "1.0.0"
license = "Apache-2.0"
edition = "2021"
publish = false

[lib]
name = "your_library_name"
crate-type = ["staticlib"]
path = "lib.rs"

[features]
experimental = []

[dependencies]
jsar_jsbinding_macro = { path = "../jsbinding_macro" }
napi = { version = "2", features = ["compat-mode", "noop"] }
napi-derive = { version = "2", features = ["compat-mode", "noop"] }

[build-dependencies]
napi-build = "2"

Make sure to add noop and compat-mode features, and set crate-type to staticlib.

Next, let’s look at the build.rs:

extern crate napi_build;

fn main() {
  napi_build::setup();
}

It’s consistent with the definition in napi-rs itself.

C/C++ Part

The compilation part is omitted; just remember to add the static library linkage.

Let’s start from the header:

#include <node/node.h>
#include <node/node_api.h>

extern "C"
{
    napi_value jsbinding_webgl(napi_env env, napi_value exports);
    napi_value jsbinding_webxr(napi_env env, napi_value exports);
} // extern "C"

#define ADD_RS_NODE_MODULE(varname, modname)              \
    static napi_module jsar_##varname##_napi_mod = {      \
        NAPI_MODULE_VERSION,                              \
        node::ModuleFlags::kLinked,                       \
        nullptr,                                          \
        jsbinding_##varname,                              \
        modname,                                          \
        nullptr,                                          \
        {0},                                              \
    };

ADD_RS_NODE_MODULE(webgl, "jsar:webgl")
ADD_RS_NODE_MODULE(webxr, "jsar:webxr")
#undef ADD_RS_NODE_MODULE

First, import the Node.js C/C++ header files. Then define the extern section, where the functions are consistent with the exported functions defined in Rust.

If you encounter linking errors where symbols cannot be found, on macOS you can use nm to check whether the global symbols exist in the .a file.

Lastly, a C/C++ macro ADD_RS_NODE_MODULE() is defined to facilitate declaring the module.

The final step is straightforward:

AddLinkedBinding(env, jsar_webgl_napi_mod);
AddLinkedBinding(env, jsar_webxr_napi_mod);
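On the JavaScript side, scripts running inside the embedder can then load these bindings by the nm_modname registered above, via process._linkedBinding:

// Inside the embedded runtime; the names match the nm_modname fields.
const webgl = process._linkedBinding("jsar:webgl");
const webxr = process._linkedBinding("jsar:webxr");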

The end

With that, the tutorial on writing linked modules using napi-rs and using them in a Node.js Embedder comes to an end. 

I highly encourage interested readers to port this to the napi-rs community (as I'm too lazy and busy myself). For such a niche usage as linked modules and the Node.js Embedder, providing more direct help to future developers is a worthy cause.

If you encounter any problems during the porting process, feel free to reach out to me (@yorkie) for discussion.

0 Comments
2024/04/14
13:19 UTC
