/r/mongodb


News, articles, and interesting stuff in general about MongoDB (unofficial).


21,922 Subscribers

1

Changing admin password

Hi all, I had a small project developed by an agency recently that included an AWS-hosted MongoDB instance.

As part of the handover, they shared the creds they used during set up.

Now that we’re done, I want to tighten up security and change the admin password.

Can someone point me to a useful guide?

I’m technically inclined, but not a dev, so some of the CLI steps I saw online went over my head.
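For a self-managed instance, the usual route is connecting with mongosh and using the changeUserPassword shell helper. The host and user names below are placeholders for whatever the agency set up:

```shell
# Connect with the handed-over credentials (placeholders), then rotate the password.
mongosh "mongodb://<your-host>:27017" -u admin -p --authenticationDatabase admin

# Inside the shell:
#   use admin
#   db.changeUserPassword("admin", passwordPrompt())   # prompts so the password stays out of shell history
```

If the agency also baked the old password into the application's connection string, update that at the same time or the app will stop authenticating.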

Thanks

0 Comments
2024/05/02
18:56 UTC

2

Migrating from GridFS to Azure (ADLS Gen2) Storage

I'm a data engineer but not a MongoDB expert. We are using GridFS for PDF images and the volume is large. We want to take data beyond a certain age threshold, archive it, then purge it from our active system to reduce the volume in production. Our path so far involves using mongoexport to grab _id values based on uploadDate, exporting them to a CSV, then iterating through those ids with mongofiles delete_id. Prior to purging, we export. We're also tasked with having a recovery process in case we need to go back and grab old data, so we're setting up a disaster recovery standby environment with this historical data.

I'm thinking there is a better way. We use Microsoft Azure and have tons of storage available. We cannot change the GridFS structure for recent data because our primary application for images needs quick access. I'm wondering if we can instead somehow export these images, store in Azure storage, and change the GridFs reference for old data so that instead of purging it completely--we simply remove it from GridFS and change the reference to point to Azure storage.

The live system would have a mix/match of some recent data using GridFS and old data using Azure Storage. I can't find any good code examples of how to do this. Is there a code snippet that shows how to change a given _id to point to Azure instead of GridFS (while other _id values keep using GridFS)? What would be the best practice to export out this data, move to Azure, and change the references to the new storage?
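One sketch of the mix-and-match idea (this is an application-level pattern, not a MongoDB or Azure feature): after copying a file's bytes out of GridFS and into Azure Blob Storage, replace its fs.files/fs.chunks entries with a small pointer document that your application checks before falling back to GridFS. The field names `storage` and `blobUrl` below are assumptions for illustration:

```javascript
// Sketch: build a lightweight pointer document from a GridFS fs.files document
// after its bytes have been copied to Azure Blob Storage. The `storage` and
// `blobUrl` field names are illustrative conventions, not a MongoDB/Azure API.
function toArchiveReference(fileDoc, containerUrl) {
  return {
    _id: fileDoc._id,                 // keep the original id so existing references still resolve
    filename: fileDoc.filename,
    length: fileDoc.length,
    uploadDate: fileDoc.uploadDate,
    storage: "azure-blob",            // tells the app where the bytes now live
    blobUrl: `${containerUrl}/${fileDoc._id}`,
  };
}
```

At read time the app branches: if the metadata document says `storage: "azure-blob"`, fetch from `blobUrl`; otherwise stream from GridFS as today. Recent data stays untouched in GridFS, so the fast path is unchanged.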

Thanks and sorry I'm such a newb.

1 Comment
2024/05/02
14:22 UTC

4

NYC .Local Keynotes - Live Stream!!

0 Comments
2024/05/02
13:58 UTC

1

Mongodb: add and remove authentication mechanisms in existing accounts?

Hi,

I have MongoDB 7.0.7 installed on RHEL, and a user using SCRAM-SHA-1 and SCRAM-SHA-256.

I suspect our app does not support one of these.

How can I remove and add auth mechanisms on an existing account?
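In mongosh this is done with db.updateUser and its mechanisms field. Restricting a user to a subset of their current mechanisms needs no password; adding a mechanism back requires re-supplying pwd so the server can generate credentials for the new mechanism list. A sketch of the argument shapes (user name and password are placeholders):

```javascript
// Second argument to db.updateUser("appUser", ...) in mongosh.
// 1) Restrict to a subset of the user's existing mechanisms (no password needed):
const restrictArgs = { mechanisms: ["SCRAM-SHA-256"] };
// 2) Add a mechanism back: the password must be supplied so the server can
//    regenerate credentials for every listed mechanism:
const addBackArgs = {
  pwd: "<new-or-current-password>",            // placeholder
  mechanisms: ["SCRAM-SHA-1", "SCRAM-SHA-256"],
};
```

Afterwards, db.getUser("appUser") should show which mechanisms are active, so you can confirm the change before pointing the app at it.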

Cheers!

3 Comments
2024/05/02
10:10 UTC

1

In a Mongodb Trigger function how to use ObjectID

Without ObjectID, my trigger function won't match the _id in my database, which I know is 100% correct. The error I'm getting when I try to import it from mongoose is:

MongoDB Node.js Driver is not supported. Please visit our Atlas App Services documentation.

Here's how the call is being made:

    await userCollection.updateOne(
        { "_id": mongoose.ObjectID(videos_user_id) },
        { $inc: { amountOfVideos: 1 } }
    );
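Atlas App Services (Triggers) functions don't bundle mongoose or the Node driver; the runtime exposes a global BSON object instead, so BSON.ObjectId(...) is the replacement for the mongoose call (mongoose's own spelling is mongoose.Types.ObjectId, incidentally). A sketch, with the runtime global stubbed only so the call shape is visible outside Atlas:

```javascript
// In an Atlas App Services (trigger) function, `BSON` is provided by the
// runtime. The stub below merely stands in for it outside that runtime.
const BSON = globalThis.BSON ?? { ObjectId: (hex) => ({ oid: hex }) };

async function incrementVideoCount(userCollection, videosUserId) {
  return userCollection.updateOne(
    { _id: BSON.ObjectId(videosUserId) },   // BSON.ObjectId instead of mongoose.ObjectID
    { $inc: { amountOfVideos: 1 } }
  );
}
```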
0 Comments
2024/05/02
03:08 UTC

1

Log/query/index analysis

Hi all,

I had a couple of log analyzers last year that the MongoDB professional services team shared with me. Unfortunately I'm at a new job now and need those tools again. I swear one was called Mongolyzer and at the time I downloaded it I found it easy but can't find it anymore. The other one required a download of the logs from atlas and using a CLI to run the analysis, but then it generated a PDF report. It was handy for slow query trend analysis, missing indexes, etc. Anyone have any ideas? IIRC one of these tools came from an article written by a mongodb employee but that's all I can remember.

3 Comments
2024/05/01
23:10 UTC

1

Firebase Auth and Mongo DB

I have a FastAPI backend that has JWT set up successfully. I have set up JWT auth on my front end, but later realized that I need to use Firebase auth as well to set up a messaging feature. Ideally users can log in using Firebase, then Firebase will send the accessToken to MongoDB for verification. How do you suggest I go about integrating Firebase auth?

Should I take the jwt auth and then build the firebase auth in? Can I integrate it into what I already have?

0 Comments
2024/05/01
20:58 UTC

0

imported csv to Mongo now i can't read it

1 Comment
2024/05/01
10:26 UTC

1

Self Hosted Mongo Db

I have a self-hosted MongoDB on a 32 GB VM. Recently, the machine has been restarting due to maxed-out RAM usage. This is happening about once a day and I don't know how or where to start looking.

I'd appreciate any direction or ideas.
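A common starting point: by default WiredTiger's internal cache is sized at roughly 50% of (RAM - 1 GB), and everything else (connections, in-memory sorts, the OS page cache) competes for the rest, so the kernel's OOM killer can take mongod down when memory runs out. Check dmesg/journalctl for OOM kills and db.serverStatus().wiredTiger.cache for the current cache usage, and consider capping the cache in mongod.conf. The 8 GB figure below is only an example to tune, not a recommendation:

```yaml
# mongod.conf fragment: cap the WiredTiger cache explicitly.
# 8 is just an example for a 32 GB box; leave headroom for the OS,
# connections, and in-memory sorts, which all live outside this cache.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8
```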

1 Comment
2024/05/01
08:38 UTC

1

Help convert aggregation pipeline to Spring boot code

Hi everyone, I need help converting this aggregation pipeline to something I can use in my Spring Boot application.

Here's a sample document

{
  "_id": { "$oid": "661f5d829e690577b3c9da38" },
  "title": "3-bedroom apartment at Kerpen",
  "description": "Newly built two bedrooms apartment at Kerpen",
  "furnishing": "Furnished.",
  "position": "Close to the city center",
  "address": {
    "street_name": "Brabanter street",
    "house_number": "23",
    "city": "Kerpen",
    "state": "Cologne"
  },
  "available": true,
  "agent_id": 1,
  "available_from": { "$date": "2024-05-30T00:00:00.000Z" },
  "cost": {
    "annual_rent": 250000,
    "agent_fee": 20000,
    "caution_fee": 25000
  },
  "facility_quality": "NORMAL",
  "pets_allowed": "YES",
  "apartment_info": {
    "room_nums": 4,
    "bathroom_nums": 2,
    "bedroom_nums": 3,
    "apartment_type": "APARTMENT"
  },
  "application_docs": [ [ "Proof of income" ], [ "Electricity bill" ] ],
  "apartment_images": [ [ "https://www.pexels.com/photo/17977592/" ], [ "https://www.pexels.com/photo/17986629/" ] ],
  "created_at": { "$date": "2024-04-17T05:26:26.510Z" },
  "updated_at": { "$date": "2024-04-17T05:26:26.510Z" },
  "_class": "com.myhome.homeAlone.listing.Listing"
}

I'm trying to group the listings by year and month such that

  1. I'll get the year a listing was made
  2. The total number of listings made in a specific year
  3. The month a listing was made and how many listings were made in that month
  4. Months are returned as numbers, so they're mapped to their corresponding month names

Here's a sample response:

[
{
  "year": 2023,
  "totalCount": 10,
  "monthlyCounts": [
    {
      "month": "July",
      "count": 6
    },
{
      "month": "September",
      "count": 4
    }
  ]
},
{
  "year": 2021,
  "totalCount": 1,
  "monthlyCounts": [
    {
      "month": "January",
      "count": 1
    }
  ]
},
{
  "year": 2024,
  "totalCount": 2,
  "monthlyCounts": [
    {
      "month": "April",
      "count": 2
    }
  ]
}

]

Here's the aggregation pipeline which gave the result above

[
  {
    $project: {
      year: {
        $year: "$created_at",
      },
      month: {
        $month: "$created_at",
      },
      monthNum: {
        $month: "$created_at",
      },
    },
  },
  {
    $group: {
      _id: {
        year: "$year",
        month: "$monthNum", 
      },
      totalCount: {
        $sum: 1,
      },
    },
  },
  {
    $group: {
      _id: "$_id.year",
      monthlyCounts: {
        $push: {
          month: {
            $switch: {
              branches: [
                {
                  case: {
                    $eq: ["$_id.month", 1],
                  },
                  then: "January",
                },
                {
                  case: {
                    $eq: ["$_id.month", 2],
                  },
                  then: "February",
                },
                {
                  case: {
                    $eq: ["$_id.month", 3],
                  },
                  then: "March",
                },
                {
                  case: {
                    $eq: ["$_id.month", 4],
                  },
                  then: "April",
                },
                {
                  case: {
                    $eq: ["$_id.month", 5],
                  },
                  then: "May",
                },
                {
                  case: {
                    $eq: ["$_id.month", 6],
                  },
                  then: "June",
                },
                {
                  case: {
                    $eq: ["$_id.month", 7],
                  },
                  then: "July",
                },
                {
                  case: {
                    $eq: ["$_id.month", 8],
                  },
                  then: "August",
                },
                {
                  case: {
                    $eq: ["$_id.month", 9],
                  },
                  then: "September",
                },
                {
                  case: {
                    $eq: ["$_id.month", 10],
                  },
                  then: "October",
                },
                {
                  case: {
                    $eq: ["$_id.month", 11],
                  },
                  then: "November",
                },
                {
                  case: {
                    $eq: ["$_id.month", 12],
                  },
                  then: "December",
                },
              ],
              default: "Unknown",
            },
          },
          count: "$totalCount",
        },
      },
      totalCount: {
        $sum: "$totalCount",
      },
    },
  },
  {
    $project: {
      _id: 0,
      year: "$_id",
      totalCount: "$totalCount",
      monthlyCounts: "$monthlyCounts",
    },
  },
]

I'm stuck converting the pipeline to something I can use in Spring Boot. This stage is the one I'm having difficulty with:

{
  _id: "$_id.year",
  // Group by year only
  monthlyCounts: {
    $push: {
      month: {
        $switch: {
          branches: [
            {
              case: {
                $eq: ["$_id.month", 1],
              },
              then: "January",
            },
            {
              case: {
                $eq: ["$_id.month", 2],
              },
              then: "February",
            },
            {
              case: {
                $eq: ["$_id.month", 3],
              },
              then: "March",
            },
            {
              case: {
                $eq: ["$_id.month", 4],
              },
              then: "April",
            },
            {
              case: {
                $eq: ["$_id.month", 5],
              },
              then: "May",
            },
            {
              case: {
                $eq: ["$_id.month", 6],
              },
              then: "June",
            },
            {
              case: {
                $eq: ["$_id.month", 7],
              },
              then: "July",
            },
            {
              case: {
                $eq: ["$_id.month", 8],
              },
              then: "August",
            },
            {
              case: {
                $eq: ["$_id.month", 9],
              },
              then: "September",
            },
            {
              case: {
                $eq: ["$_id.month", 10],
              },
              then: "October",
            },
            {
              case: {
                $eq: ["$_id.month", 11],
              },
              then: "November",
            },
            {
              case: {
                $eq: ["$_id.month", 12],
              },
              then: "December",
            },
          ],
          default: "Unknown",
        },
      },
      count: "$totalCount",
    },
  },
  totalCount: {
    $sum: "$totalCount",
  },
}
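One simplification that makes the port much easier: the twelve-branch $switch can collapse into a single $arrayElemAt over a month-name array. Spring Data's fluent API can express $arrayElemAt, and when a stage gets awkward you can also drop in a raw stage (e.g. a custom AggregationOperation that returns Document.parse(json); the exact hook depends on your Spring Data version). A sketch of the compact expression as a plain JS object, the same shape you would parse in Java:

```javascript
// Compact replacement for the $switch: index into a month-name array.
// $_id.month is 1-based, so subtract 1 before indexing.
const MONTHS = [
  "January", "February", "March", "April", "May", "June",
  "July", "August", "September", "October", "November", "December",
];
const monthNameExpr = {
  $arrayElemAt: [MONTHS, { $subtract: ["$_id.month", 1] }],
};
```

Use monthNameExpr wherever the original pipeline had the whole $switch block; the surrounding $group/$push structure stays exactly the same.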
0 Comments
2024/05/01
06:56 UTC

1

Cluster version 7.0.9 backups broken

MongoDB Atlas has upgraded some of our clusters from 7.0.8 to 7.0.9, and now the backups are failing.
Is this happening to anyone else?

0 Comments
2024/05/01
03:55 UTC

1

How do I return an ID value in Python?

Example Document:

{
  "_id": { "$numberInt": "2" },
  "contract_id": { "$numberInt": "2" },
  "name": "Machinery",
  "price": { "$numberInt": "235899" },
  "time_inserted": { "$date": { "$numberLong": "1714308202190" } },
  "status": "available"
}

After Googling far and wide I have hit many dead ends... hoping someone can help me out. I am running on Atlas with Python 3.12 (newest everything).

All I want to do is grab contract_id so that for the next item that gets inserted into the database it can have a unique contract_id. I guess I could do a random number, but eventually if the same contract is inserted, I would want to use that same contract_id again.

Open to thoughts.
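A common pattern for this (shown here with the Node driver call shape; PyMongo's equivalent is find_one_and_update with return_document=ReturnDocument.AFTER) is a small counters collection that gets atomically incremented, so two concurrent inserts can never grab the same id. The collection and field names below are assumptions:

```javascript
// Atomic counter pattern: one document per sequence in a `counters` collection.
async function nextContractId(counters) {
  const doc = await counters.findOneAndUpdate(
    { _id: "contract_id" },
    { $inc: { seq: 1 } },
    { upsert: true, returnDocument: "after" }  // older drivers wrap the result in `.value`
  );
  return doc.seq;
}
```

Reading the current max (a find sorted by contract_id descending) also works, but it races under concurrent inserts; the counter document avoids that because $inc is atomic on the server.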

5 Comments
2024/04/30
04:37 UTC

1

[GenerativeAI] Is mongodb good for my use-case? Comparing with weaviate

Hi!

I am working on a recommendation system using LLM embeddings and I’m looking for the right database for my use-case.

I have put together a set of requirements with what I investigated on how I can fulfill them using this database, and thought of coming here to check if someone with more experience with it can help me to know if this makes sense, if I’m overlooking something, etc.

I don’t see having to support more than 500 records and maybe 100 requests per day in the mid-term, so I don’t need something with great optimizations or scaling options, but of course the cheaper the better.

So far, these are my requirements and what I have found in the docs:

  • I must be able to store n>=1 vector embeddings per ID, OR I must be able to store 1 very large vector embedding per ID: YES
  • I must be able to store and retrieve metadata: YES, because vectors are stored as any other document
  • I must be able to do pre-filtering based on metadata: YES
  • I must be able to do database migrations (i.e. add/remove columns to each table): YES and I can do that with vectors too because they are stored as any other property in my collections
  • (Highly desirable) I want a good ts (or js) client: YES. I can use mongodb, mongoose or prisma
  • (Desirable) I want to do pagination after pre-filtered queries OR (Required) I must be able to retrieve every result: YES, but as I don’t expect to have that many records I am thinking of just storing the rank of every result in a separate collection and querying that directly.

To be honest, I agree with the benefits of vector search with MongoDB listed on their website, but the starting price for dedicated clusters is imo too high, and vector search is not available in serverless mode. Also, I find the pricing page very confusing. For instance:

  • If I start with a shared free cluster, how does the vector search nodes costs relate ($0.11/hr for an S30 cluster)?
  • Same question, but if I start with a dedicated M10 cluster.
  • What are “vector search nodes” anyway?

One other “con” is that doing stuff like hybrid search is considerably more complex than in weaviate.

Also, for reference, here is a similar post that I wrote in Weaviate's forum with my investigation.

1 Comment
2024/04/29
23:42 UTC

1

Self-Signed SSL Problem

Hi, I'm trying to install self-signed SSL certificates for my MongoDB.

However, I'm getting this error while connecting: 'SSL peer certificate validation failed: self signed certificate'

What should I do?
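With a self-signed certificate the client has no reason to trust the server, so you have to hand the client the CA certificate you signed the server cert with (or, for throwaway testing only, disable validation). Host and paths below are placeholders:

```shell
# Preferred: trust your own CA when connecting.
mongosh "mongodb://<host>:27017/?tls=true" --tlsCAFile /path/to/rootCA.pem

# Testing only: skips certificate validation entirely.
mongosh "mongodb://<host>:27017/?tls=true" --tlsAllowInvalidCertificates
```

Drivers take the same options (e.g. a tlsCAFile connection-string parameter or the equivalent client option), so whatever CA file fixes mongosh should fix the application too.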

3 Comments
2024/04/29
14:39 UTC

0

Cordially, fuck mongoDB

8 Comments
2024/04/29
12:16 UTC

2

How to perform update in a nested array element in spring data monogo

I am trying to create a Expense tracker application using SpringBoot, MonogDB and ReactJS

I have a collection called "userexpense" in which each record has three fields: "id", "email" and "expenses".

{
  "id": "123",
  "email": "example@gmail.com",
  "expenses": [
    {
      "taskid": "xyz",
      "amount": "90",
      "description": "vada pav",
      "category": "food"
    },
    {
      "taskid": "qpr",
      "amount": "900",
      "description": "train",
      "category": "transport"
    }
  ]
}

"expenses" is an array of objects which holds the expenses of individual users. I want to perform update operations on elements of the expenses array: updates like changing the amount and description of a particular expense based on its taskid.

How can I achieve this using MongoTemplate in Spring Data Mongo?
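With MongoTemplate this maps to a Query whose criteria match the array element (the "expenses.taskid" path) plus an Update that uses the positional $ operator, e.g. new Update().set("expenses.$.amount", ...) passed to updateFirst. The raw filter/update documents those builders produce look like this (values are placeholders from the post):

```javascript
// Raw documents behind the MongoTemplate call: the filter matches one element
// of the `expenses` array, then `$` in the update targets exactly that element.
const filter = { email: "example@gmail.com", "expenses.taskid": "xyz" };
const update = {
  $set: {
    "expenses.$.amount": "120",          // placeholder new values
    "expenses.$.description": "pav bhaji",
  },
};
```

If you ever need to update several matching elements at once, the arrayFilters form ("expenses.$[elem].amount" with an arrayFilters condition on elem.taskid) is the alternative to the single-match positional operator.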

3 Comments
2024/04/29
07:46 UTC

4

Best Practice for Secured MongoDB?

Is there a document on how to secure the content of MongoDB such that only authenticated software modules can read the content? I am a software developer for a scientific instrument appliance. We have a lot of IP stored in the MongoDB used in the instrument appliance. I have been tasked to protect the content, in addition to our legal contracts.

My assumption is that the root password of the Linux OS can be compromised. So hackers can gain access to the OS as root. They can insert their own software modules to hack the data. So I have been looking into TPM of the motherboard, MongoDB's encryption at rest, and HSM based protection.

I realized that others must have accomplished the same goals already. So I am wondering if someone can point me to the resources for such tasks. It is assumed that attackers/hackers will have access to the MongoDB since it is an appliance product.

6 Comments
2024/04/28
21:34 UTC

1

import a document

Hey, I'm trying to import a ".json" document into my db, so I used cmd on Windows to go to program files/mongo/bin, but I keep seeing everyone accessing program files/mongo/server/<version number>/bin. After that I typed:

mongoimport --db mydb --collection mycollection --file C:\Users\Me\Desktop\dossier\name.json

0 Comments
2024/04/28
20:39 UTC

3

Is there a way to delete a document and get the deleted document (or the inverse) in a single connection to the DB?
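Yes: findOneAndDelete removes the document and returns it in the same server round trip (findOneAndUpdate and findOneAndReplace are the non-delete counterparts, returning either the pre- or post-image depending on options). A sketch with the Node driver call shape:

```javascript
// Delete a document and receive the deleted document in one operation.
async function deleteAndReturn(collection, id) {
  // Recent Node drivers return the document itself; older driver versions
  // wrap it in a { value } envelope.
  return collection.findOneAndDelete({ _id: id });
}
```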

2 Comments
2024/04/28
17:39 UTC

3

What's your thoughts on MongoDB Atlas Search?

I'm using Atlas' managed MongoDB and I love it: it's easy, simple and scalable. I now saw they have a service called "MongoDB Atlas Search", which is a way to perform full-text search with scoring (and more), similar to ElasticSearch but without the headache of ETL/syncing ElasticSearch with Mongo.

Does anyone use this service and can share their opinion? (I'm using NodeJS for my BE)

I saw a bunch of tutorials on their official YT channel, but they all seem to create functions and indexes in the Atlas web UI before being able to use them in their FE. This is not ideal for me, as I must keep all my schemas and configurations in my code. Is there a way to keep all the index-creation logic in my code, similar to how you can use mongoose to keep a consistent schema for your collections?
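Recent drivers and mongosh can manage Atlas Search indexes from code via createSearchIndex, listSearchIndexes and dropSearchIndex (against Atlas clusters; check your driver version supports them), so the definition can live in your repo instead of the web UI. A sketch of the call shape with the definition kept as data:

```javascript
// Keep the search index definition in code and (re)create it at deploy time.
const searchIndexSpec = {
  name: "default",
  definition: { mappings: { dynamic: true } },  // minimal dynamic mapping; replace with your fields
};

async function ensureSearchIndex(collection) {
  return collection.createSearchIndex(searchIndexSpec);
}
```

Run this from a migration/startup script the same way you register mongoose schemas, and the index definition is versioned with the rest of your code.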

Thanks in advance :)

3 Comments
2024/04/28
14:35 UTC

6

Natural language to MongoDB query conversion

I am excited to release the next iteration of my side project 'nl2query', this time a fine-tuned Phi2 model to convert natural language input to corresponding MongoDB queries. The previous CodeT5+ model was not robust enough to handle nested fields (like arrays and objects), but the Phi2 is. Explore the code on GitHub: https://github.com/Chirayu-Tripathi/nl2query.

https://preview.redd.it/ik55qojcq5xc1.png?width=1800&format=png&auto=webp&s=d4ad68b600abd909404a066ebbfee3aee41db732

2 Comments
2024/04/28
05:37 UTC

1

Improving Performance of MongoDB Query for Counting Contacts in a Group

I'm encountering performance issues with a MongoDB query used to count contacts belonging to specific contact groups. Initially, this approach worked well with a small dataset, but as the number of contacts in the collection has scaled to over 800k documents, the query's execution time has become prohibitively slow (approximately 16-25 seconds).

Database Schema:

Schema for Contact:

const Contact = new mongoose.Schema(
  {
    name: String,
    email: { type: String, required: true },
    user: { type: mongoose.Schema.Types.ObjectId, ref: "User" },
    groups: [{ type: mongoose.Schema.Types.ObjectId, ref: "ContactGroup" }]
  },
  { timestamps: true }
);

Schema for ContactGroup:

const ContactGroup = new mongoose.Schema(
  {
    title: { type: String, required: true },
    description: { type: String, default: "" },
    user: { type: mongoose.Schema.Types.ObjectId, ref: "User" },
  },
  { timestamps: true }
);

The query I'm running:

const countdocs = async (query) => {
  return Contact.countDocuments(query);
};

const dt = await countdocs({
  $expr: {
    $in: [mongoose.Types.ObjectId(group._id), "$groups"]
  }
});

I've attempted to create an index on the groups field in the Contact collection, but the query's performance remains suboptimal. Could anyone suggest alternative approaches or optimizations to improve the query's execution time? Additionally, I'm open to feedback on the current data modeling and indexing strategies.
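One thing worth checking: the $expr/$in form forces an expression evaluation per document, which tends not to use the multikey index well. A plain equality match against the array field is semantically equivalent here (equality against an array field matches any element) and can use the index on `groups`. A sketch:

```javascript
// Equality against an array field matches documents where ANY element equals
// the value, and it can use the multikey index on `groups`.
function groupCountFilter(groupId) {
  return { groups: groupId };
}
// e.g. Contact.countDocuments(groupCountFilter(new mongoose.Types.ObjectId(group._id)))
```

Comparing explain("executionStats") for both filters should show whether the $expr version was doing a collection scan while the equality version does an index scan.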

Any assistance or insights would be greatly appreciated. Thank you!

4 Comments
2024/04/27
15:46 UTC

1

Is there a way to make this query more optimal?

I have two MongoDB collections, `Contact` and `ContactGroup`. In a contact document there's a field called `groups`, which stores the ObjectIds of contact groups (`ContactGroup`) in an array to easily allow querying for all contacts that belong to a specific contact group. With a small number of documents in the database this modeling worked fine, but as the `Contact` collection has scaled to over 800k documents, running a query to count all contacts that belong to a contact group is very slow, roughly 16-25s. What is a more optimal way to go about this?

This is the query I'm running:

```

const countdocs = async (query) => {
  return Contact.countDocuments(query);
};

const dt = await countdocs({
  $expr: {
    $in: [mongoose.Types.ObjectId(group._id), "$groups"]
  }
});

```

Here's the schema for `Contact`:

```

const Contact = new mongoose.Schema(
  {
    name: {
      type: String,
    },
    email: {
      type: String,
      required: true,
    },
    user: {
      type: mongoose.Schema.Types.ObjectId,
      ref: "User",
    },
    groups: {
      type: [
        {
          type: mongoose.Schema.Types.ObjectId,
          ref: "ContactGroup",
        },
      ],
      default: [],
    },
  },
  { timestamps: true }
);

```

Here's the schema for `ContactGroup`:

```

const ContactGroup = new mongoose.Schema(
  {
    title: {
      type: String,
      required: true,
    },
    description: {
      type: String,
      default: "",
    },
    user: {
      type: mongoose.Schema.Types.ObjectId,
      ref: "User",
    },
  },
  { timestamps: true }
);

```

I've tried creating an index on the `groups` field but that also didn't make the query more optimal.

1 Comment
2024/04/27
13:42 UTC

4

aggregate or find

I know this is a very broad discussion but I have a case where I need to know which is more performant.

user: {
    username: string;
    email: string;
    password: string;
}
tier: {
    name: string;
    price: number;
    description: string;
    userId: ref.User
}
tweets: {
    content: string;
    userId: ref.User;
    tiers: [ref.Tier]
}
subscription: {
    subscriberId: ref.User;
    targetId: ref.User;
    tierId: ref.Tier;
}

Now let's say I'm in the page /username, and I want to get all the tweets of a user, that would include all the tweets that my current subscription to that user includes, as well as the tweets that don't have a tier (considered as free or public tweets).
I currently have this code for pulling what I need:

const subscribedToUser = await UserModel.findOne({ username });
const subscribedToUserId = subscribedToUser._id;

const subscriptionTweets = await SubscriptionModel.aggregate([
    {
      $match: {
        subscriberId: new ObjectId(subscriberId),
        targetId: subscribedToUserId,
      },
    },
    {
      $lookup: {
        from: "tiers",
        localField: "tierId",
        foreignField: "_id",
        as: "tierDetails",
      },
    },
    { $unwind: { path: "$tierDetails", preserveNullAndEmptyArrays: true } },
    {
      $lookup: {
        from: "tweets",
        let: { subscribedTiers: "$tierDetails._id" },
        pipeline: [
          {
            $match: {
              $expr: {
                $and: [
                  {
                    $or: [
                      { $eq: [{ $size: "$tiers" }, 0] },
                      { $in: ["$$subscribedTiers", "$tiers"] },
                    ],
                  },
                  {
                    $eq: ["$userId", subscribedToUserId],
                  },
                ],
              },
            },
          },
        ],
        as: "subscribedTweets",
      },
    },
    { $sort: { "subscribedTweets.createdAt": -1 } },
    { $limit: 10 },
  ]);

My problem with this is that I can only use it to get the tweets of a user I'm subscribed to, but I wanted to also use it to get the free tweets when I'm not subscribed.

Is this possible? I'm also considering using multiple find calls instead of this single aggregate; which one is better in this case?
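One way to cover both cases is to query the tweets collection directly: first resolve the tier ids of the viewer's subscription (an empty list when there is none), then match tweets that are either tier-free or gated behind one of those tiers. The filter builder below is a sketch using the post's field names:

```javascript
// Tweets visible to a viewer: the author's free tweets (empty `tiers`),
// plus tweets gated behind any tier the viewer is subscribed to.
function visibleTweetsFilter(authorId, subscribedTierIds) {
  return {
    userId: authorId,
    $or: [
      { tiers: { $size: 0 } },               // free/public tweets
      { tiers: { $in: subscribedTierIds } }, // tier-gated tweets the viewer can see
    ],
  };
}
```

With no subscription you pass an empty array, and since $in against an empty list matches nothing, only the free tweets come back, exactly the fallback wanted. That also answers the aggregate-vs-find question: once the tier ids are resolved up front, a plain find with sort and limit is enough, and the $lookup stages disappear.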

Thanks in advance.

5 Comments
2024/04/27
10:29 UTC

1

Indexing concern

In MongoDB indexing they say the ordering of an index should be equality fields first, then sort fields, then range fields (the ESR rule). Say my index is {a: 1, b: -1, c: 1} and I'm sorting on column b in descending order, where a is an equality column, b is a sort column and c is a range column.

I understand range predicates scan a span of the index. If equality comes first, it narrows the scan to fewer index entries; within that span the entries are already ordered by b descending, so applying the sort on b returns records in descending order without extra work (since all the returned documents are mapped through index key b).

My doubt is why the sort column comes before the range column in the index (why is that recommended) and how not doing so causes an in-memory sort. If my index were {a: 1, c: 1, b: -1}, then the equality column a would still narrow the scan, the range query on c would scan those selected entries (which still happens in the previous case), and all those records would be mapped to the indexed column b, so there should be no need for an in-memory sort, right? But they say this will cause an in-memory sort. As far as I can tell, the catch is that with {a, c, b} the entries are ordered by b only within each distinct value of c, not across the whole c range, so the server has to buffer and sort the combined results.

1 Comment
2024/04/27
10:19 UTC

1

Working with timezones and $dateTrunc

I am confused about how $dateTrunc works. For example, consider the following:

ISODate("2024-04-24T01:00:00Z")

$dateTrunc: {
  date: "$date",
  unit: "hour",
  binSize: 2,
  timezone: "+02:00",
  startOfWeek: "Monday"
}  

In this case, I get the result: 2024-04-24T00:00:00Z, which is correct. However, when I use the same input and corresponding timezone:

$dateTrunc: {
  date: "$date",
  unit: "hour",
  binSize: 2,
  timezone: "Europe/Warsaw",
  startOfWeek: "Monday"
}  

I get 2024-04-24T01:00:00Z (where I should get the same result).

What is happening?

0 Comments
2024/04/26
21:19 UTC

1

Encountering "No Capacity" Error When Upgrading from M0 to M10 Cluster in MongoDB

Hi everyone,

I'm currently facing an issue with MongoDB Atlas while attempting to upgrade my database from a free M0 cluster to a paid M10 cluster. Despite following the usual upgrade procedures, I keep running into a "no capacity" error. This error has halted the upgrade process, and I'm unsure how to proceed.

  • Are there specific strategies to mitigate this issue, or alternative approaches I should consider?
  • Any advice on checking and ensuring regional capacities, or should I consider switching regions?
  • Has anyone else encountered this "no capacity" error while upgrading?

Any insights or suggestions would be greatly appreciated!

https://preview.redd.it/gw1eieu67uwc1.png?width=1158&format=png&auto=webp&s=f5be508cea3d39dd1a17e228a133a4d1ce253ff0

6 Comments
2024/04/26
14:48 UTC

2

Unable to check featureCompatibilityVersion of standalone 3.4.24 database

Planning to upgrade an old 3.4.24 standalone database, one step at a time. One of the requirements listed for upgrading to 3.6 is that the featureCompatibilityVersion is set to 3.4

Running db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ) with my admin user returns "errmsg" : "not authorized on admin to execute command { getParameter: 1.0, featureCompatibilityVersion: 1.0 }"

Googling this issue returns pretty much nothing, I guess this isn't supposed to happen. I haven't been able to find specifically what role a user needs to have to check the featureCompatibilityVersion.

I tried adding the dbAdmin role for the admin db to user, but still get the same error.

Any ideas?

----------------------- Solved ---------------------------

The necessary permissions are under cluster administration roles. User needs at minimum the clusterMonitor role to use getParameter

4 Comments
2024/04/25
18:10 UTC
