some progress

Jonas_Jones 2023-03-30 20:40:42 +02:00
parent aea93a5527
commit e3c15bd288
1388 changed files with 306946 additions and 68323 deletions

node_modules/mongodb/LICENSE.md generated vendored Normal file (201 lines)

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

node_modules/mongodb/README.md generated vendored Normal file (295 lines)

@@ -0,0 +1,295 @@
# MongoDB NodeJS Driver
The official [MongoDB](https://www.mongodb.com/) driver for Node.js.
**Upgrading to version 5? Take a look at our [upgrade guide here](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/CHANGES_5.0.0.md)!**
## Quick Links
| what | where |
| ------------- | ----------------------------------------------------------------------------------------------------------------- |
| documentation | [www.mongodb.com/docs/drivers/node](https://www.mongodb.com/docs/drivers/node) |
| api-doc | [mongodb.github.io/node-mongodb-native](https://mongodb.github.io/node-mongodb-native) |
| npm package | [www.npmjs.com/package/mongodb](https://www.npmjs.com/package/mongodb) |
| source | [github.com/mongodb/node-mongodb-native](https://github.com/mongodb/node-mongodb-native) |
| mongodb | [www.mongodb.com](https://www.mongodb.com) |
| changelog | [HISTORY.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/HISTORY.md) |
| upgrade to v5 | [etc/notes/CHANGES_5.0.0.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/CHANGES_5.0.0.md) |
| contributing | [CONTRIBUTING.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/CONTRIBUTING.md) |
### Bugs / Feature Requests
Think you've found a bug? Want to see a new feature in `node-mongodb-native`? Please open a
case in our issue management tool, JIRA:
- Create an account and log in at [jira.mongodb.org](https://jira.mongodb.org).
- Navigate to the NODE project [jira.mongodb.org/browse/NODE](https://jira.mongodb.org/browse/NODE).
- Click **Create Issue** - Please provide as much information as possible about the issue type and how to reproduce it.
Bug reports in JIRA for all driver projects (i.e. NODE, PYTHON, CSHARP, JAVA) and the
Core Server (i.e. SERVER) project are **public**.
### Support / Feedback
For issues with, questions about, or feedback for the Node.js driver, please look into our [support channels](https://docs.mongodb.com/manual/support). Please do not email any of the driver developers directly with issues or questions - you're more likely to get an answer on the [MongoDB Community Forums](https://community.mongodb.com/tags/c/drivers-odms-connectors/7/node-js-driver).
### Change Log
Change history can be found in [`HISTORY.md`](https://github.com/mongodb/node-mongodb-native/blob/HEAD/HISTORY.md).
### Compatibility
For version compatibility matrices, please refer to the following links:
- [MongoDB](https://docs.mongodb.com/drivers/node/current/compatibility/#mongodb-compatibility)
- [NodeJS](https://docs.mongodb.com/drivers/node/current/compatibility/#language-compatibility)
#### TypeScript Version
We recommend using the latest version of TypeScript; however, we currently ensure the driver's public types compile against `typescript@4.1.6`.
This is the lowest TypeScript version guaranteed to work with our driver: older versions may or may not work - use at your own risk.
Since TypeScript [does not restrict breaking changes to major versions](https://github.com/Microsoft/TypeScript/wiki/Breaking-Changes), we consider this support best effort.
If you run into any unexpected compiler failures against our supported TypeScript versions, please let us know by filing an issue on our [JIRA](https://jira.mongodb.org/browse/NODE).
## Installation
The recommended way to get started using the Node.js 5.x driver is by using `npm` (the Node Package Manager) to install the dependency in your project.
After you've created your own project using `npm init`, you can run:
```bash
npm install mongodb
# or ...
yarn add mongodb
```
This will download the MongoDB driver and add a dependency entry in your `package.json` file.
If you are a TypeScript user, you will need the Node.js type definitions to use the driver's definitions:
```sh
npm install -D @types/node
```
## Driver Extensions
The MongoDB driver can optionally be enhanced by the following feature packages:
Maintained by MongoDB:
- Zstd network compression - [@mongodb-js/zstd](https://github.com/mongodb-js/zstd)
- MongoDB field level and queryable encryption - [mongodb-client-encryption](https://github.com/mongodb/libmongocrypt#readme)
- GSSAPI / SSPI / Kerberos authentication - [kerberos](https://github.com/mongodb-js/kerberos)
Some of these packages include native C++ extensions.
Consult the [troubleshooting guide here](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/native-extensions.md) if you run into compilation issues.
Third party:
- Snappy network compression - [snappy](https://github.com/Brooooooklyn/snappy)
- AWS authentication - [@aws-sdk/credential-providers](https://github.com/aws/aws-sdk-js-v3/tree/main/packages/credential-providers)
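Installing a package is only half the story for some of these features; compression, for example, must also be requested on the client. A minimal sketch (assuming `@mongodb-js/zstd` has been installed alongside the driver):
```js
const { MongoClient } = require('mongodb');

// Sketch: request zstd wire compression via the `compressors` client option;
// the client and server negotiate, and compression is skipped if the server
// does not support zstd.
const client = new MongoClient('mongodb://localhost:27017', { compressors: ['zstd'] });
```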
## Quick Start
This guide will show you how to set up a simple application using Node.js and MongoDB. Its scope is limited to setting up the driver and performing simple CRUD operations. For more in-depth coverage, see the [official documentation](https://docs.mongodb.com/drivers/node/).
### Create the `package.json` file
First, create a directory where your application will live.
```bash
mkdir myProject
cd myProject
```
Enter the following command to create the initial structure for your new project (the `-y` flag accepts the default answers):
```bash
npm init -y
```
Next, install the driver as a dependency.
```bash
npm install mongodb
```
### Start a MongoDB Server
For complete MongoDB installation instructions, see [the manual](https://docs.mongodb.org/manual/installation/).
1. Download the right MongoDB version from [MongoDB](https://www.mongodb.org/downloads).
2. Create a database directory (in this case under **/data**).
3. Install and start a `mongod` process.
```bash
mongod --dbpath=/data
```
You should see the **mongod** process start up and print some status information.
### Connect to MongoDB
Create a new **app.js** file and add the following code to try out some basic CRUD
operations using the MongoDB driver.
Add code to connect to the server and the database **myProject**:
> **NOTE:** Resolving DNS connection issues
>
> Node.js 18 changed the default DNS resolution ordering from always prioritizing IPv4 to the ordering
> returned by the DNS provider. In some environments, this can result in `localhost` resolving to
> an IPv6 address instead of an IPv4 one, and a consequent failure to connect to the server.
>
> This can be resolved by:
>
> - specifying the IP address family using the MongoClient `family` option (`MongoClient(<uri>, { family: 4 } )`)
> - launching mongod or mongos with the IPv6 flag enabled ([--ipv6 mongod option documentation](https://www.mongodb.com/docs/manual/reference/program/mongod/#std-option-mongod.--ipv6))
> - using a host of `127.0.0.1` in place of `localhost`
> - specifying the DNS resolution ordering with the `--dns-resolution-order` Node.js command line argument (e.g. `node --dns-resolution-order=ipv4first`)
```js
const { MongoClient } = require('mongodb');
// or as an es module:
// import { MongoClient } from 'mongodb'
// Connection URL
const url = 'mongodb://localhost:27017';
const client = new MongoClient(url);
// Database Name
const dbName = 'myProject';
async function main() {
// Use connect method to connect to the server
await client.connect();
console.log('Connected successfully to server');
const db = client.db(dbName);
const collection = db.collection('documents');
// the following code examples can be pasted here...
return 'done.';
}
main()
.then(console.log)
.catch(console.error)
.finally(() => client.close());
```
Run your app from the command line with:
```bash
node app.js
```
The application should print **Connected successfully to server** to the console.
### Insert a Document
Add the following code to **app.js**; it uses the **insertMany**
method to add three documents to the **documents** collection.
```js
const insertResult = await collection.insertMany([{ a: 1 }, { a: 2 }, { a: 3 }]);
console.log('Inserted documents =>', insertResult);
```
The **insertMany** command returns an object with information about the insert operations.
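For instance (a sketch assuming the three inserts above succeeded), the result exposes the acknowledged insert count and the generated `_id`s keyed by operation index:
```js
// insertedCount is 3 for the call above; insertedIds maps each operation's
// index to the _id assigned to that document.
console.log(insertResult.insertedCount);
console.log(insertResult.insertedIds);
```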
### Find All Documents
Add a query that returns all the documents.
```js
const findResult = await collection.find({}).toArray();
console.log('Found documents =>', findResult);
```
This query returns all the documents in the **documents** collection.
If you add this below the insertMany example, you'll see the documents you've inserted.
### Find Documents with a Query Filter
Add a query filter to find only documents which meet the query criteria.
```js
const filteredDocs = await collection.find({ a: 3 }).toArray();
console.log('Found documents filtered by { a: 3 } =>', filteredDocs);
```
Only the documents which match `'a' : 3` should be returned.
### Update a document
The following operation updates a document in the **documents** collection.
```js
const updateResult = await collection.updateOne({ a: 3 }, { $set: { b: 1 } });
console.log('Updated documents =>', updateResult);
```
The method updates the first document where the field **a** is equal to **3** by adding a new field **b**, set to **1**, to the document. `updateResult` contains information about whether a matching document was found to update.
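As a small sketch (assuming the `updateOne` call above), the fields most often checked are:
```js
// matchedCount is 1 if a document matched { a: 3 };
// modifiedCount is 1 if that document was actually changed.
console.log(updateResult.matchedCount);
console.log(updateResult.modifiedCount);
```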
### Remove a document
Remove the document where the field **a** is equal to **3**.
```js
const deleteResult = await collection.deleteMany({ a: 3 });
console.log('Deleted documents =>', deleteResult);
```
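The returned object reports how many documents were removed. As a short sketch (assuming the `deleteMany` call above):
```js
// deletedCount reflects how many documents matched { a: 3 } and were removed.
console.log(deleteResult.deletedCount);
```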
### Index a Collection
[Indexes](https://docs.mongodb.org/manual/indexes/) can improve your application's
performance. The following function creates an index on the **a** field in the
**documents** collection.
```js
const indexName = await collection.createIndex({ a: 1 });
console.log('index name =', indexName);
```
For more detailed information, see the [indexing strategies page](https://docs.mongodb.com/manual/applications/indexes/).
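`createIndex` also accepts an options document. As a sketch for illustration (not part of the original quick start), a unique index on **a** could be requested like this:
```js
// Sketch: ask the server to enforce uniqueness on the a field; subsequent
// inserts with a duplicate value for a would fail with a duplicate key error.
const uniqueIndexName = await collection.createIndex({ a: 1 }, { unique: true });
console.log('unique index name =', uniqueIndexName);
```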
## Error Handling
If you need to filter certain errors from our driver we have a helpful tree of errors described in [etc/notes/errors.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/errors.md).
It is our recommendation to use `instanceof` checks on errors and to avoid relying on parsing `error.message` and `error.name` strings in your code.
We guarantee `instanceof` checks will pass according to semver guidelines, but errors may be sub-classed or their messages may change at any time, even in patch releases, as we see fit to increase the helpfulness of the errors.
Any new errors we add to the driver will directly extend an existing error class and no existing error will be moved to a different parent class outside of a major release.
This means `instanceof` will always be able to accurately capture the errors that our driver throws.
```typescript
const client = new MongoClient(url);
await client.connect();
const collection = client.db().collection('collection');
try {
await collection.insertOne({ _id: 1 });
await collection.insertOne({ _id: 1 }); // duplicate key error
} catch (error) {
if (error instanceof MongoServerError) {
console.log(`Error worth logging: ${error}`); // special case for some reason
}
throw error; // still want to crash
}
```
## Next Steps
- [MongoDB Documentation](https://docs.mongodb.com/manual/)
- [MongoDB Node Driver Documentation](https://docs.mongodb.com/drivers/node/)
- [Read about Schemas](https://docs.mongodb.com/manual/core/data-modeling-introduction/)
- [Star us on GitHub](https://github.com/mongodb/node-mongodb-native)
## License
[Apache 2.0](LICENSE.md)
© 2009-2012 Christian Amor Kvalheim
© 2012-present MongoDB [Contributors](https://github.com/mongodb/node-mongodb-native/blob/HEAD/CONTRIBUTORS.md)

node_modules/mongodb/etc/prepare.js generated vendored Executable file (12 lines)

@@ -0,0 +1,12 @@
#! /usr/bin/env node
var cp = require('child_process');
var fs = require('fs');
var os = require('os');
if (fs.existsSync('src')) {
    cp.spawn('npm', ['run', 'build:dts'], { stdio: 'inherit', shell: os.platform() === 'win32' });
} else {
    if (!fs.existsSync('lib')) {
        console.warn('MongoDB: No compiled javascript present, the driver is not installed correctly.');
    }
}

node_modules/mongodb/lib/admin.js generated vendored Normal file (131 lines)

@@ -0,0 +1,131 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Admin = void 0;
const add_user_1 = require("./operations/add_user");
const execute_operation_1 = require("./operations/execute_operation");
const list_databases_1 = require("./operations/list_databases");
const remove_user_1 = require("./operations/remove_user");
const run_command_1 = require("./operations/run_command");
const validate_collection_1 = require("./operations/validate_collection");
/**
 * The **Admin** class is an internal class that allows convenient access to
 * the admin functionality and commands for MongoDB.
 *
 * **ADMIN Cannot directly be instantiated**
 * @public
 *
 * @example
 * ```ts
 * import { MongoClient } from 'mongodb';
 *
 * const client = new MongoClient('mongodb://localhost:27017');
 * const admin = client.db().admin();
 * const dbInfo = await admin.listDatabases();
 * for (const db of dbInfo.databases) {
 *   console.log(db.name);
 * }
 * ```
 */
class Admin {
    /**
     * Create a new Admin instance
     * @internal
     */
    constructor(db) {
        this.s = { db };
    }
    /**
     * Execute a command
     *
     * @param command - The command to execute
     * @param options - Optional settings for the command
     */
    async command(command, options) {
        return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new run_command_1.RunCommandOperation(this.s.db, command, { dbName: 'admin', ...options }));
    }
    /**
     * Retrieve the server build information
     *
     * @param options - Optional settings for the command
     */
    async buildInfo(options) {
        return this.command({ buildinfo: 1 }, options);
    }
    /**
     * Retrieve the server build information
     *
     * @param options - Optional settings for the command
     */
    async serverInfo(options) {
        return this.command({ buildinfo: 1 }, options);
    }
    /**
     * Retrieve this db's server status.
     *
     * @param options - Optional settings for the command
     */
    async serverStatus(options) {
        return this.command({ serverStatus: 1 }, options);
    }
    /**
     * Ping the MongoDB server and retrieve results
     *
     * @param options - Optional settings for the command
     */
    async ping(options) {
        return this.command({ ping: 1 }, options);
    }
    /**
     * Add a user to the database
     *
     * @param username - The username for the new user
     * @param passwordOrOptions - An optional password for the new user, or the options for the command
     * @param options - Optional settings for the command
     */
    async addUser(username, passwordOrOptions, options) {
        options =
            options != null && typeof options === 'object'
                ? options
                : passwordOrOptions != null && typeof passwordOrOptions === 'object'
                    ? passwordOrOptions
                    : undefined;
        const password = typeof passwordOrOptions === 'string' ? passwordOrOptions : undefined;
        return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new add_user_1.AddUserOperation(this.s.db, username, password, { dbName: 'admin', ...options }));
    }
    /**
     * Remove a user from a database
     *
     * @param username - The username to remove
     * @param options - Optional settings for the command
     */
    async removeUser(username, options) {
        return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new remove_user_1.RemoveUserOperation(this.s.db, username, { dbName: 'admin', ...options }));
    }
    /**
     * Validate an existing collection
     *
     * @param collectionName - The name of the collection to validate.
     * @param options - Optional settings for the command
     */
    async validateCollection(collectionName, options = {}) {
        return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new validate_collection_1.ValidateCollectionOperation(this, collectionName, options));
    }
    /**
     * List the available databases
     *
     * @param options - Optional settings for the command
     */
    async listDatabases(options) {
        return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new list_databases_1.ListDatabasesOperation(this.s.db, options));
    }
    /**
     * Get ReplicaSet status
     *
     * @param options - Optional settings for the command
     */
    async replSetGetStatus(options) {
        return this.command({ replSetGetStatus: 1 }, options);
    }
}
exports.Admin = Admin;
//# sourceMappingURL=admin.js.map

node_modules/mongodb/lib/admin.js.map generated vendored Normal file (1 line)

@@ -0,0 +1 @@
{"version":3,"file":"admin.js","sourceRoot":"","sources":["../src/admin.ts"],"names":[],"mappings":";;;AAEA,oDAAyE;AAEzE,sEAAkE;AAClE,gEAIqC;AACrC,0DAAkF;AAClF,0DAAkF;AAClF,0EAG0C;AAO1C;;;;;;;;;;;;;;;;;;GAkBG;AACH,MAAa,KAAK;IAIhB;;;OAGG;IACH,YAAY,EAAM;QAChB,IAAI,CAAC,CAAC,GAAG,EAAE,EAAE,EAAE,CAAC;IAClB,CAAC;IAED;;;;;OAKG;IACH,KAAK,CAAC,OAAO,CAAC,OAAiB,EAAE,OAA2B;QAC1D,OAAO,IAAA,oCAAgB,EACrB,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,MAAM,EAClB,IAAI,iCAAmB,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,OAAO,EAAE,EAAE,MAAM,EAAE,OAAO,EAAE,GAAG,OAAO,EAAE,CAAC,CAC7E,CAAC;IACJ,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,SAAS,CAAC,OAAiC;QAC/C,OAAO,IAAI,CAAC,OAAO,CAAC,EAAE,SAAS,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IACjD,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,UAAU,CAAC,OAAiC;QAChD,OAAO,IAAI,CAAC,OAAO,CAAC,EAAE,SAAS,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IACjD,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,YAAY,CAAC,OAAiC;QAClD,OAAO,IAAI,CAAC,OAAO,CAAC,EAAE,YAAY,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IACpD,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,IAAI,CAAC,OAAiC;QAC1C,OAAO,IAAI,CAAC,OAAO,CAAC,EAAE,IAAI,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IAC5C,CAAC;IAED;;;;;;OAMG;IACH,KAAK,CAAC,OAAO,CACX,QAAgB,EAChB,iBAA2C,EAC3C,OAAwB;QAExB,OAAO;YACL,OAAO,IAAI,IAAI,IAAI,OAAO,OAAO,KAAK,QAAQ;gBAC5C,CAAC,CAAC,OAAO;gBACT,CAAC,CAAC,iBAAiB,IAAI,IAAI,IAAI,OAAO,iBAAiB,KAAK,QAAQ;oBACpE,CAAC,CAAC,iBAAiB;oBACnB,CAAC,CAAC,SAAS,CAAC;QAChB,MAAM,QAAQ,GAAG,OAAO,iBAAiB,KAAK,QAAQ,CAAC,CAAC,CAAC,iBAAiB,CAAC,CAAC,CAAC,SAAS,CAAC;QACvF,OAAO,IAAA,oCAAgB,EACrB,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,MAAM,EAClB,IAAI,2BAAgB,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,QAAQ,EAAE,QAAQ,EAAE,EAAE,MAAM,EAAE,OAAO,EAAE,GAAG,OAAO,EAAE,CAAC,CACrF,CAAC;IACJ,CAAC;IAED;;;;;OAKG;IACH,KAAK,CAAC,UAAU,CAAC,QAAgB,EAAE,OAA2B;QAC5D,OAAO,IAAA,oCAAgB,EACrB,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,MAAM,EAClB,IAAI,iCAAmB,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,QAAQ,EAAE,EAAE,MAAM,EAAE,OAAO,EAAE,GAAG,OAAO,EAAE,CAAC,CAC9E,CAAC;IACJ,CAAC;IAED;;;;;OAKG;IACH,KAAK,CAAC,kBAAkB,CACtB,cAAsB,EACtB,UAAqC,EAAE;QAEvC,OAAO,IAAA,oCAAgB,EACrB,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,MAAM,EAClB,IAAI,iDAA2B,CAAC,IAAI,EAAE,cAAc,EAAE,OAAO,CAAC,CAC/D,CAAC;IACJ,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,aAAa,CAAC,OAA8B;QAChD,OAAO,IAAA,oCAAgB,EAAC,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,MAAM,EAAE,IAAI,uCAAsB,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC,CAAC;IAC9F,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,gBAAgB,CAAC,OAAiC;QACtD,OAAO,IAAI,CAAC,OAAO,CAAC,EAAE,gBAAgB,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IACxD,CAAC;CACF;AApID,sBAoIC"}

node_modules/mongodb/lib/bson.js generated vendored Normal file (61 lines)

@@ -0,0 +1,61 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.resolveBSONOptions = exports.pluckBSONSerializeOptions = exports.Timestamp = exports.serialize = exports.ObjectId = exports.MinKey = exports.MaxKey = exports.Long = exports.Int32 = exports.Double = exports.deserialize = exports.Decimal128 = exports.DBRef = exports.Code = exports.calculateObjectSize = exports.BSONType = exports.BSONSymbol = exports.BSONRegExp = exports.BSON = exports.Binary = void 0;
var bson_1 = require("bson");
Object.defineProperty(exports, "Binary", { enumerable: true, get: function () { return bson_1.Binary; } });
Object.defineProperty(exports, "BSON", { enumerable: true, get: function () { return bson_1.BSON; } });
Object.defineProperty(exports, "BSONRegExp", { enumerable: true, get: function () { return bson_1.BSONRegExp; } });
Object.defineProperty(exports, "BSONSymbol", { enumerable: true, get: function () { return bson_1.BSONSymbol; } });
Object.defineProperty(exports, "BSONType", { enumerable: true, get: function () { return bson_1.BSONType; } });
Object.defineProperty(exports, "calculateObjectSize", { enumerable: true, get: function () { return bson_1.calculateObjectSize; } });
Object.defineProperty(exports, "Code", { enumerable: true, get: function () { return bson_1.Code; } });
Object.defineProperty(exports, "DBRef", { enumerable: true, get: function () { return bson_1.DBRef; } });
Object.defineProperty(exports, "Decimal128", { enumerable: true, get: function () { return bson_1.Decimal128; } });
Object.defineProperty(exports, "deserialize", { enumerable: true, get: function () { return bson_1.deserialize; } });
Object.defineProperty(exports, "Double", { enumerable: true, get: function () { return bson_1.Double; } });
Object.defineProperty(exports, "Int32", { enumerable: true, get: function () { return bson_1.Int32; } });
Object.defineProperty(exports, "Long", { enumerable: true, get: function () { return bson_1.Long; } });
Object.defineProperty(exports, "MaxKey", { enumerable: true, get: function () { return bson_1.MaxKey; } });
Object.defineProperty(exports, "MinKey", { enumerable: true, get: function () { return bson_1.MinKey; } });
Object.defineProperty(exports, "ObjectId", { enumerable: true, get: function () { return bson_1.ObjectId; } });
Object.defineProperty(exports, "serialize", { enumerable: true, get: function () { return bson_1.serialize; } });
Object.defineProperty(exports, "Timestamp", { enumerable: true, get: function () { return bson_1.Timestamp; } });
function pluckBSONSerializeOptions(options) {
    const { fieldsAsRaw, useBigInt64, promoteValues, promoteBuffers, promoteLongs, serializeFunctions, ignoreUndefined, bsonRegExp, raw, enableUtf8Validation } = options;
    return {
        fieldsAsRaw,
        useBigInt64,
        promoteValues,
        promoteBuffers,
        promoteLongs,
        serializeFunctions,
        ignoreUndefined,
        bsonRegExp,
        raw,
        enableUtf8Validation
    };
}
exports.pluckBSONSerializeOptions = pluckBSONSerializeOptions;
/**
 * Merge the given BSONSerializeOptions, preferring options over the parent's options, and
 * substituting defaults for values not set.
 *
 * @internal
 */
function resolveBSONOptions(options, parent) {
    const parentOptions = parent?.bsonOptions;
    return {
        raw: options?.raw ?? parentOptions?.raw ?? false,
        useBigInt64: options?.useBigInt64 ?? parentOptions?.useBigInt64 ?? false,
        promoteLongs: options?.promoteLongs ?? parentOptions?.promoteLongs ?? true,
        promoteValues: options?.promoteValues ?? parentOptions?.promoteValues ?? true,
        promoteBuffers: options?.promoteBuffers ?? parentOptions?.promoteBuffers ?? false,
        ignoreUndefined: options?.ignoreUndefined ?? parentOptions?.ignoreUndefined ?? false,
        bsonRegExp: options?.bsonRegExp ?? parentOptions?.bsonRegExp ?? false,
        serializeFunctions: options?.serializeFunctions ?? parentOptions?.serializeFunctions ?? false,
        fieldsAsRaw: options?.fieldsAsRaw ?? parentOptions?.fieldsAsRaw ?? {},
        enableUtf8Validation: options?.enableUtf8Validation ?? parentOptions?.enableUtf8Validation ?? true
    };
}
exports.resolveBSONOptions = resolveBSONOptions;
//# sourceMappingURL=bson.js.map

node_modules/mongodb/lib/bson.js.map generated vendored Normal file (1 line)

@@ -0,0 +1 @@
{"version":3,"file":"bson.js","sourceRoot":"","sources":["../src/bson.ts"],"names":[],"mappings":";;;AAEA,6BAoBc;AAnBZ,8FAAA,MAAM,OAAA;AACN,4FAAA,IAAI,OAAA;AACJ,kGAAA,UAAU,OAAA;AACV,kGAAA,UAAU,OAAA;AACV,gGAAA,QAAQ,OAAA;AACR,2GAAA,mBAAmB,OAAA;AACnB,4FAAA,IAAI,OAAA;AACJ,6FAAA,KAAK,OAAA;AACL,kGAAA,UAAU,OAAA;AACV,mGAAA,WAAW,OAAA;AAEX,8FAAA,MAAM,OAAA;AACN,6FAAA,KAAK,OAAA;AACL,4FAAA,IAAI,OAAA;AACJ,8FAAA,MAAM,OAAA;AACN,8FAAA,MAAM,OAAA;AACN,gGAAA,QAAQ,OAAA;AACR,iGAAA,SAAS,OAAA;AACT,iGAAA,SAAS,OAAA;AA4CX,SAAgB,yBAAyB,CAAC,OAA6B;IACrE,MAAM,EACJ,WAAW,EACX,WAAW,EACX,aAAa,EACb,cAAc,EACd,YAAY,EACZ,kBAAkB,EAClB,eAAe,EACf,UAAU,EACV,GAAG,EACH,oBAAoB,EACrB,GAAG,OAAO,CAAC;IACZ,OAAO;QACL,WAAW;QACX,WAAW;QACX,aAAa;QACb,cAAc;QACd,YAAY;QACZ,kBAAkB;QAClB,eAAe;QACf,UAAU;QACV,GAAG;QACH,oBAAoB;KACrB,CAAC;AACJ,CAAC;AAzBD,8DAyBC;AAED;;;;;GAKG;AACH,SAAgB,kBAAkB,CAChC,OAA8B,EAC9B,MAA+C;IAE/C,MAAM,aAAa,GAAG,MAAM,EAAE,WAAW,CAAC;IAC1C,OAAO;QACL,GAAG,EAAE,OAAO,EAAE,GAAG,IAAI,aAAa,EAAE,GAAG,IAAI,KAAK;QAChD,WAAW,EAAE,OAAO,EAAE,WAAW,IAAI,aAAa,EAAE,WAAW,IAAI,KAAK;QACxE,YAAY,EAAE,OAAO,EAAE,YAAY,IAAI,aAAa,EAAE,YAAY,IAAI,IAAI;QAC1E,aAAa,EAAE,OAAO,EAAE,aAAa,IAAI,aAAa,EAAE,aAAa,IAAI,IAAI;QAC7E,cAAc,EAAE,OAAO,EAAE,cAAc,IAAI,aAAa,EAAE,cAAc,IAAI,KAAK;QACjF,eAAe,EAAE,OAAO,EAAE,eAAe,IAAI,aAAa,EAAE,eAAe,IAAI,KAAK;QACpF,UAAU,EAAE,OAAO,EAAE,UAAU,IAAI,aAAa,EAAE,UAAU,IAAI,KAAK;QACrE,kBAAkB,EAAE,OAAO,EAAE,kBAAkB,IAAI,aAAa,EAAE,kBAAkB,IAAI,KAAK;QAC7F,WAAW,EAAE,OAAO,EAAE,WAAW,IAAI,aAAa,EAAE,WAAW,IAAI,EAAE;QACrE,oBAAoB,EAClB,OAAO,EAAE,oBAAoB,IAAI,aAAa,EAAE,oBAAoB,IAAI,IAAI;KAC/E,CAAC;AACJ,CAAC;AAlBD,gDAkBC"}

node_modules/mongodb/lib/bulk/common.js generated vendored Normal file (874 lines)

@@ -0,0 +1,874 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.BulkOperationBase = exports.FindOperators = exports.MongoBulkWriteError = exports.mergeBatchResults = exports.WriteError = exports.WriteConcernError = exports.BulkWriteResult = exports.Batch = exports.BatchType = void 0;
const bson_1 = require("../bson");
const error_1 = require("../error");
const delete_1 = require("../operations/delete");
const execute_operation_1 = require("../operations/execute_operation");
const insert_1 = require("../operations/insert");
const operation_1 = require("../operations/operation");
const update_1 = require("../operations/update");
const utils_1 = require("../utils");
const write_concern_1 = require("../write_concern");
/** @internal */
const kServerError = Symbol('serverError');
/** @public */
exports.BatchType = Object.freeze({
INSERT: 1,
UPDATE: 2,
DELETE: 3
});
/**
* Keeps the state of an unordered batch so we can rewrite the results
* correctly after command execution
*
* @public
*/
class Batch {
constructor(batchType, originalZeroIndex) {
this.originalZeroIndex = originalZeroIndex;
this.currentIndex = 0;
this.originalIndexes = [];
this.batchType = batchType;
this.operations = [];
this.size = 0;
this.sizeBytes = 0;
}
}
exports.Batch = Batch;
/**
* @public
* The result of a bulk write.
*/
class BulkWriteResult {
static generateIdMap(ids) {
const idMap = {};
for (const doc of ids) {
idMap[doc.index] = doc._id;
}
return idMap;
}
/**
* Create a new BulkWriteResult instance
* @internal
*/
constructor(bulkResult) {
this.result = bulkResult;
this.insertedCount = this.result.nInserted ?? 0;
this.matchedCount = this.result.nMatched ?? 0;
this.modifiedCount = this.result.nModified ?? 0;
this.deletedCount = this.result.nRemoved ?? 0;
this.upsertedCount = this.result.upserted.length ?? 0;
this.upsertedIds = BulkWriteResult.generateIdMap(this.result.upserted);
this.insertedIds = BulkWriteResult.generateIdMap(this.result.insertedIds);
Object.defineProperty(this, 'result', { value: this.result, enumerable: false });
}
/** Evaluates to true if the bulk operation correctly executes */
get ok() {
return this.result.ok;
}
/** The number of inserted documents */
get nInserted() {
return this.result.nInserted;
}
/** Number of upserted documents */
get nUpserted() {
return this.result.nUpserted;
}
/** Number of matched documents */
get nMatched() {
return this.result.nMatched;
}
/** Number of documents updated physically on disk */
get nModified() {
return this.result.nModified;
}
/** Number of removed documents */
get nRemoved() {
return this.result.nRemoved;
}
/** Returns an array of all inserted ids */
getInsertedIds() {
return this.result.insertedIds;
}
/** Returns an array of all upserted ids */
getUpsertedIds() {
return this.result.upserted;
}
/** Returns the upserted id at the given index */
getUpsertedIdAt(index) {
return this.result.upserted[index];
}
/** Returns raw internal result */
getRawResponse() {
return this.result;
}
/** Returns true if the bulk operation contains a write error */
hasWriteErrors() {
return this.result.writeErrors.length > 0;
}
/** Returns the number of write errors off the bulk operation */
getWriteErrorCount() {
return this.result.writeErrors.length;
}
/** Returns a specific write error object */
getWriteErrorAt(index) {
return index < this.result.writeErrors.length ? this.result.writeErrors[index] : undefined;
}
/** Retrieve all write errors */
getWriteErrors() {
return this.result.writeErrors;
}
/** Retrieve the write concern error if one exists */
getWriteConcernError() {
if (this.result.writeConcernErrors.length === 0) {
return;
}
else if (this.result.writeConcernErrors.length === 1) {
// Return the error
return this.result.writeConcernErrors[0];
}
else {
// Combine the errors
let errmsg = '';
for (let i = 0; i < this.result.writeConcernErrors.length; i++) {
const err = this.result.writeConcernErrors[i];
errmsg = errmsg + err.errmsg;
// TODO: Something better
if (i === 0)
errmsg = errmsg + ' and ';
}
return new WriteConcernError({ errmsg, code: error_1.MONGODB_ERROR_CODES.WriteConcernFailed });
}
}
toString() {
return `BulkWriteResult(${this.result})`;
}
isOk() {
return this.result.ok === 1;
}
}
exports.BulkWriteResult = BulkWriteResult;
/**
* An error representing a failure by the server to apply the requested write concern to the bulk operation.
* @public
* @category Error
*/
class WriteConcernError {
constructor(error) {
this[kServerError] = error;
}
/** Write concern error code. */
get code() {
return this[kServerError].code;
}
/** Write concern error message. */
get errmsg() {
return this[kServerError].errmsg;
}
/** Write concern error info. */
get errInfo() {
return this[kServerError].errInfo;
}
toJSON() {
return this[kServerError];
}
toString() {
return `WriteConcernError(${this.errmsg})`;
}
}
exports.WriteConcernError = WriteConcernError;
/**
* An error that occurred during a BulkWrite on the server.
* @public
* @category Error
*/
class WriteError {
constructor(err) {
this.err = err;
}
/** WriteError code. */
get code() {
return this.err.code;
}
/** WriteError original bulk operation index. */
get index() {
return this.err.index;
}
/** WriteError message. */
get errmsg() {
return this.err.errmsg;
}
/** WriteError details. */
get errInfo() {
return this.err.errInfo;
}
/** Returns the underlying operation that caused the error */
getOperation() {
return this.err.op;
}
toJSON() {
return { code: this.err.code, index: this.err.index, errmsg: this.err.errmsg, op: this.err.op };
}
toString() {
return `WriteError(${JSON.stringify(this.toJSON())})`;
}
}
exports.WriteError = WriteError;
/** Merges results into shared data structure */
function mergeBatchResults(batch, bulkResult, err, result) {
// If we have an error set the result to be the err object
if (err) {
result = err;
}
else if (result && result.result) {
result = result.result;
}
if (result == null) {
return;
}
// Do we have a top level error stop processing and return
if (result.ok === 0 && bulkResult.ok === 1) {
bulkResult.ok = 0;
const writeError = {
index: 0,
code: result.code || 0,
errmsg: result.message,
errInfo: result.errInfo,
op: batch.operations[0]
};
bulkResult.writeErrors.push(new WriteError(writeError));
return;
}
else if (result.ok === 0 && bulkResult.ok === 0) {
return;
}
// If we have an insert Batch type
if (isInsertBatch(batch) && result.n) {
bulkResult.nInserted = bulkResult.nInserted + result.n;
}
// If we have a delete Batch type
if (isDeleteBatch(batch) && result.n) {
bulkResult.nRemoved = bulkResult.nRemoved + result.n;
}
let nUpserted = 0;
// We have an array of upserted values, we need to rewrite the indexes
if (Array.isArray(result.upserted)) {
nUpserted = result.upserted.length;
for (let i = 0; i < result.upserted.length; i++) {
bulkResult.upserted.push({
index: result.upserted[i].index + batch.originalZeroIndex,
_id: result.upserted[i]._id
});
}
}
else if (result.upserted) {
nUpserted = 1;
bulkResult.upserted.push({
index: batch.originalZeroIndex,
_id: result.upserted
});
}
// If we have an update Batch type
if (isUpdateBatch(batch) && result.n) {
const nModified = result.nModified;
bulkResult.nUpserted = bulkResult.nUpserted + nUpserted;
bulkResult.nMatched = bulkResult.nMatched + (result.n - nUpserted);
if (typeof nModified === 'number') {
bulkResult.nModified = bulkResult.nModified + nModified;
}
else {
bulkResult.nModified = 0;
}
}
if (Array.isArray(result.writeErrors)) {
for (let i = 0; i < result.writeErrors.length; i++) {
const writeError = {
index: batch.originalIndexes[result.writeErrors[i].index],
code: result.writeErrors[i].code,
errmsg: result.writeErrors[i].errmsg,
errInfo: result.writeErrors[i].errInfo,
op: batch.operations[result.writeErrors[i].index]
};
bulkResult.writeErrors.push(new WriteError(writeError));
}
}
if (result.writeConcernError) {
bulkResult.writeConcernErrors.push(new WriteConcernError(result.writeConcernError));
}
}
exports.mergeBatchResults = mergeBatchResults;
function executeCommands(bulkOperation, options, callback) {
if (bulkOperation.s.batches.length === 0) {
return callback(undefined, new BulkWriteResult(bulkOperation.s.bulkResult));
}
const batch = bulkOperation.s.batches.shift();
function resultHandler(err, result) {
// The error is a driver-related error, not a bulk op error; return early
if (err && 'message' in err && !(err instanceof error_1.MongoWriteConcernError)) {
return callback(new MongoBulkWriteError(err, new BulkWriteResult(bulkOperation.s.bulkResult)));
}
if (err instanceof error_1.MongoWriteConcernError) {
return handleMongoWriteConcernError(batch, bulkOperation.s.bulkResult, err, callback);
}
// Merge the results together
mergeBatchResults(batch, bulkOperation.s.bulkResult, err, result);
const writeResult = new BulkWriteResult(bulkOperation.s.bulkResult);
if (bulkOperation.handleWriteError(callback, writeResult))
return;
// Execute the next command in line
executeCommands(bulkOperation, options, callback);
}
const finalOptions = (0, utils_1.resolveOptions)(bulkOperation, {
...options,
ordered: bulkOperation.isOrdered
});
if (finalOptions.bypassDocumentValidation !== true) {
delete finalOptions.bypassDocumentValidation;
}
// Set an operationId if provided
if (bulkOperation.operationId) {
resultHandler.operationId = bulkOperation.operationId;
}
// Is the bypassDocumentValidation option specified
if (bulkOperation.s.bypassDocumentValidation === true) {
finalOptions.bypassDocumentValidation = true;
}
// Is the checkKeys option disabled
if (bulkOperation.s.checkKeys === false) {
finalOptions.checkKeys = false;
}
if (finalOptions.retryWrites) {
if (isUpdateBatch(batch)) {
finalOptions.retryWrites = finalOptions.retryWrites && !batch.operations.some(op => op.multi);
}
if (isDeleteBatch(batch)) {
finalOptions.retryWrites =
finalOptions.retryWrites && !batch.operations.some(op => op.limit === 0);
}
}
try {
if (isInsertBatch(batch)) {
(0, execute_operation_1.executeOperation)(bulkOperation.s.collection.s.db.s.client, new insert_1.InsertOperation(bulkOperation.s.namespace, batch.operations, finalOptions), resultHandler);
}
else if (isUpdateBatch(batch)) {
(0, execute_operation_1.executeOperation)(bulkOperation.s.collection.s.db.s.client, new update_1.UpdateOperation(bulkOperation.s.namespace, batch.operations, finalOptions), resultHandler);
}
else if (isDeleteBatch(batch)) {
(0, execute_operation_1.executeOperation)(bulkOperation.s.collection.s.db.s.client, new delete_1.DeleteOperation(bulkOperation.s.namespace, batch.operations, finalOptions), resultHandler);
}
}
catch (err) {
// Force top level error
err.ok = 0;
// Merge top level error and return
mergeBatchResults(batch, bulkOperation.s.bulkResult, err, undefined);
callback();
}
}
function handleMongoWriteConcernError(batch, bulkResult, err, callback) {
mergeBatchResults(batch, bulkResult, undefined, err.result);
callback(new MongoBulkWriteError({
message: err.result?.writeConcernError.errmsg,
code: err.result?.writeConcernError.result
}, new BulkWriteResult(bulkResult)));
}
/**
* An error indicating an unsuccessful Bulk Write
* @public
* @category Error
*/
class MongoBulkWriteError extends error_1.MongoServerError {
/** Creates a new MongoBulkWriteError */
constructor(error, result) {
super(error);
this.writeErrors = [];
if (error instanceof WriteConcernError)
this.err = error;
else if (!(error instanceof Error)) {
this.message = error.message;
this.code = error.code;
this.writeErrors = error.writeErrors ?? [];
}
this.result = result;
Object.assign(this, error);
}
get name() {
return 'MongoBulkWriteError';
}
/** Number of documents inserted. */
get insertedCount() {
return this.result.insertedCount;
}
/** Number of documents matched for update. */
get matchedCount() {
return this.result.matchedCount;
}
/** Number of documents modified. */
get modifiedCount() {
return this.result.modifiedCount;
}
/** Number of documents deleted. */
get deletedCount() {
return this.result.deletedCount;
}
/** Number of documents upserted. */
get upsertedCount() {
return this.result.upsertedCount;
}
/** Inserted document generated Id's, hash key is the index of the originating operation */
get insertedIds() {
return this.result.insertedIds;
}
/** Upserted document generated Id's, hash key is the index of the originating operation */
get upsertedIds() {
return this.result.upsertedIds;
}
}
exports.MongoBulkWriteError = MongoBulkWriteError;
/**
* A builder object that is returned from {@link BulkOperationBase#find}.
* Is used to build a write operation that involves a query filter.
*
* @public
*/
class FindOperators {
/**
* Creates a new FindOperators object.
* @internal
*/
constructor(bulkOperation) {
this.bulkOperation = bulkOperation;
}
/** Add a multiple update operation to the bulk operation */
update(updateDocument) {
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.UPDATE, (0, update_1.makeUpdateStatement)(currentOp.selector, updateDocument, {
...currentOp,
multi: true
}));
}
/** Add a single update operation to the bulk operation */
updateOne(updateDocument) {
if (!(0, utils_1.hasAtomicOperators)(updateDocument)) {
throw new error_1.MongoInvalidArgumentError('Update document requires atomic operators');
}
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.UPDATE, (0, update_1.makeUpdateStatement)(currentOp.selector, updateDocument, { ...currentOp, multi: false }));
}
/** Add a replace one operation to the bulk operation */
replaceOne(replacement) {
if ((0, utils_1.hasAtomicOperators)(replacement)) {
throw new error_1.MongoInvalidArgumentError('Replacement document must not use atomic operators');
}
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.UPDATE, (0, update_1.makeUpdateStatement)(currentOp.selector, replacement, { ...currentOp, multi: false }));
}
/** Add a delete one operation to the bulk operation */
deleteOne() {
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(currentOp.selector, { ...currentOp, limit: 1 }));
}
/** Add a delete many operation to the bulk operation */
delete() {
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(currentOp.selector, { ...currentOp, limit: 0 }));
}
/** Upsert modifier for update bulk operation, noting that this operation is an upsert. */
upsert() {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.upsert = true;
return this;
}
/** Specifies the collation for the query condition. */
collation(collation) {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.collation = collation;
return this;
}
/** Specifies arrayFilters for UpdateOne or UpdateMany bulk operations. */
arrayFilters(arrayFilters) {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.arrayFilters = arrayFilters;
return this;
}
/** Specifies hint for the bulk operation. */
hint(hint) {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.hint = hint;
return this;
}
}
exports.FindOperators = FindOperators;
/**
* TODO(NODE-4063)
* BulkWrites merge complexity is implemented in executeCommands
* This provides a vehicle to treat bulkOperations like any other operation (hence "shim")
* We would like this logic to simply live inside the BulkWriteOperation class
* @internal
*/
class BulkWriteShimOperation extends operation_1.AbstractOperation {
constructor(bulkOperation, options) {
super(options);
this.bulkOperation = bulkOperation;
}
execute(server, session, callback) {
if (this.options.session == null) {
// An implicit session could have been created by 'executeOperation'
// So if we stick it on finalOptions here, each bulk operation
// will use this same session, it'll be passed in the same way
// an explicit session would be
this.options.session = session;
}
return executeCommands(this.bulkOperation, this.options, callback);
}
}
/** @public */
class BulkOperationBase {
/**
* Create a new OrderedBulkOperation or UnorderedBulkOperation instance
* @internal
*/
constructor(collection, options, isOrdered) {
// determine whether bulkOperation is ordered or unordered
this.isOrdered = isOrdered;
const topology = (0, utils_1.getTopology)(collection);
options = options == null ? {} : options;
// TODO Bring from driver information in hello
// Get the namespace for the write operations
const namespace = collection.s.namespace;
// Used to mark operation as executed
const executed = false;
// Current item
const currentOp = undefined;
// Set max byte size
const hello = topology.lastHello();
// If we have autoEncryption on, batch-splitting must be done on 2mb chunks, but single documents
// over 2mb are still allowed
const usingAutoEncryption = !!(topology.s.options && topology.s.options.autoEncrypter);
const maxBsonObjectSize = hello && hello.maxBsonObjectSize ? hello.maxBsonObjectSize : 1024 * 1024 * 16;
const maxBatchSizeBytes = usingAutoEncryption ? 1024 * 1024 * 2 : maxBsonObjectSize;
const maxWriteBatchSize = hello && hello.maxWriteBatchSize ? hello.maxWriteBatchSize : 1000;
// Calculates the largest possible size of an Array key, represented as a BSON string
// element. This calculation:
// 1 byte for BSON type
// # of bytes = length of (string representation of (maxWriteBatchSize - 1))
// + 1 byte for null terminator
const maxKeySize = (maxWriteBatchSize - 1).toString(10).length + 2;
// Final options for retryable writes
let finalOptions = Object.assign({}, options);
finalOptions = (0, utils_1.applyRetryableWrites)(finalOptions, collection.s.db);
// Final results
const bulkResult = {
ok: 1,
writeErrors: [],
writeConcernErrors: [],
insertedIds: [],
nInserted: 0,
nUpserted: 0,
nMatched: 0,
nModified: 0,
nRemoved: 0,
upserted: []
};
// Internal state
this.s = {
// Final result
bulkResult,
// Current batch state
currentBatch: undefined,
currentIndex: 0,
// ordered specific
currentBatchSize: 0,
currentBatchSizeBytes: 0,
// unordered specific
currentInsertBatch: undefined,
currentUpdateBatch: undefined,
currentRemoveBatch: undefined,
batches: [],
// Write concern
writeConcern: write_concern_1.WriteConcern.fromOptions(options),
// Max batch size options
maxBsonObjectSize,
maxBatchSizeBytes,
maxWriteBatchSize,
maxKeySize,
// Namespace
namespace,
// Topology
topology,
// Options
options: finalOptions,
// BSON options
bsonOptions: (0, bson_1.resolveBSONOptions)(options),
// Current operation
currentOp,
// Executed
executed,
// Collection
collection,
// Fundamental error
err: undefined,
// check keys
checkKeys: typeof options.checkKeys === 'boolean' ? options.checkKeys : false
};
// bypass Validation
if (options.bypassDocumentValidation === true) {
this.s.bypassDocumentValidation = true;
}
}
/**
* Add a single insert document to the bulk operation
*
* @example
* ```ts
* const bulkOp = collection.initializeOrderedBulkOp();
*
* // Adds three inserts to the bulkOp.
* bulkOp
* .insert({ a: 1 })
* .insert({ b: 2 })
* .insert({ c: 3 });
* await bulkOp.execute();
* ```
*/
insert(document) {
if (document._id == null && !shouldForceServerObjectId(this)) {
document._id = new bson_1.ObjectId();
}
return this.addToOperationsList(exports.BatchType.INSERT, document);
}
/**
* Builds a find operation for an update/updateOne/delete/deleteOne/replaceOne.
* Returns a builder object used to complete the definition of the operation.
*
* @example
* ```ts
* const bulkOp = collection.initializeOrderedBulkOp();
*
* // Add an updateOne to the bulkOp
* bulkOp.find({ a: 1 }).updateOne({ $set: { b: 2 } });
*
* // Add an updateMany to the bulkOp
* bulkOp.find({ c: 3 }).update({ $set: { d: 4 } });
*
* // Add an upsert
* bulkOp.find({ e: 5 }).upsert().updateOne({ $set: { f: 6 } });
*
* // Add a deletion
* bulkOp.find({ g: 7 }).deleteOne();
*
* // Add a multi deletion
* bulkOp.find({ h: 8 }).delete();
*
* // Add a replaceOne
* bulkOp.find({ i: 9 }).replaceOne({writeConcern: { j: 10 }});
*
* // Update using a pipeline (requires Mongodb 4.2 or higher)
* bulk.find({ k: 11, y: { $exists: true }, z: { $exists: true } }).updateOne([
* { $set: { total: { $sum: [ '$y', '$z' ] } } }
* ]);
*
* // All of the ops will now be executed
* await bulkOp.execute();
* ```
*/
find(selector) {
if (!selector) {
throw new error_1.MongoInvalidArgumentError('Bulk find operation must specify a selector');
}
// Save a current selector
this.s.currentOp = {
selector: selector
};
return new FindOperators(this);
}
/** Specifies a raw operation to perform in the bulk write. */
raw(op) {
if (op == null || typeof op !== 'object') {
throw new error_1.MongoInvalidArgumentError('Operation must be an object with an operation key');
}
if ('insertOne' in op) {
const forceServerObjectId = shouldForceServerObjectId(this);
if (op.insertOne && op.insertOne.document == null) {
// NOTE: provided for legacy support, but this is a malformed operation
if (forceServerObjectId !== true && op.insertOne._id == null) {
op.insertOne._id = new bson_1.ObjectId();
}
return this.addToOperationsList(exports.BatchType.INSERT, op.insertOne);
}
if (forceServerObjectId !== true && op.insertOne.document._id == null) {
op.insertOne.document._id = new bson_1.ObjectId();
}
return this.addToOperationsList(exports.BatchType.INSERT, op.insertOne.document);
}
if ('replaceOne' in op || 'updateOne' in op || 'updateMany' in op) {
if ('replaceOne' in op) {
if ('q' in op.replaceOne) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
const updateStatement = (0, update_1.makeUpdateStatement)(op.replaceOne.filter, op.replaceOne.replacement, { ...op.replaceOne, multi: false });
if ((0, utils_1.hasAtomicOperators)(updateStatement.u)) {
throw new error_1.MongoInvalidArgumentError('Replacement document must not use atomic operators');
}
return this.addToOperationsList(exports.BatchType.UPDATE, updateStatement);
}
if ('updateOne' in op) {
if ('q' in op.updateOne) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
const updateStatement = (0, update_1.makeUpdateStatement)(op.updateOne.filter, op.updateOne.update, {
...op.updateOne,
multi: false
});
if (!(0, utils_1.hasAtomicOperators)(updateStatement.u)) {
throw new error_1.MongoInvalidArgumentError('Update document requires atomic operators');
}
return this.addToOperationsList(exports.BatchType.UPDATE, updateStatement);
}
if ('updateMany' in op) {
if ('q' in op.updateMany) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
const updateStatement = (0, update_1.makeUpdateStatement)(op.updateMany.filter, op.updateMany.update, {
...op.updateMany,
multi: true
});
if (!(0, utils_1.hasAtomicOperators)(updateStatement.u)) {
throw new error_1.MongoInvalidArgumentError('Update document requires atomic operators');
}
return this.addToOperationsList(exports.BatchType.UPDATE, updateStatement);
}
}
if ('deleteOne' in op) {
if ('q' in op.deleteOne) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
return this.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(op.deleteOne.filter, { ...op.deleteOne, limit: 1 }));
}
if ('deleteMany' in op) {
if ('q' in op.deleteMany) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
return this.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(op.deleteMany.filter, { ...op.deleteMany, limit: 0 }));
}
// otherwise an unknown operation was provided
throw new error_1.MongoInvalidArgumentError('bulkWrite only supports insertOne, updateOne, updateMany, deleteOne, deleteMany');
}
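    // Usage sketch (hedged, mirroring the shapes handled above): raw() accepts the
    // same operation documents as Collection.bulkWrite, for example:
    //   bulkOp.raw({ insertOne: { document: { a: 1 } } });
    //   bulkOp.raw({ updateOne: { filter: { a: 1 }, update: { $set: { b: 2 } } } });
    //   bulkOp.raw({ deleteMany: { filter: { b: 2 } } });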
get bsonOptions() {
return this.s.bsonOptions;
}
get writeConcern() {
return this.s.writeConcern;
}
get batches() {
const batches = [...this.s.batches];
if (this.isOrdered) {
if (this.s.currentBatch)
batches.push(this.s.currentBatch);
}
else {
if (this.s.currentInsertBatch)
batches.push(this.s.currentInsertBatch);
if (this.s.currentUpdateBatch)
batches.push(this.s.currentUpdateBatch);
if (this.s.currentRemoveBatch)
batches.push(this.s.currentRemoveBatch);
}
return batches;
}
async execute(options = {}) {
if (this.s.executed) {
throw new error_1.MongoBatchReExecutionError();
}
const writeConcern = write_concern_1.WriteConcern.fromOptions(options);
if (writeConcern) {
this.s.writeConcern = writeConcern;
}
        // If we have a current batch, move it onto the list of batches to execute
if (this.isOrdered) {
if (this.s.currentBatch)
this.s.batches.push(this.s.currentBatch);
}
else {
if (this.s.currentInsertBatch)
this.s.batches.push(this.s.currentInsertBatch);
if (this.s.currentUpdateBatch)
this.s.batches.push(this.s.currentUpdateBatch);
if (this.s.currentRemoveBatch)
this.s.batches.push(this.s.currentRemoveBatch);
}
        // If we have no operations in the bulk, raise an error
if (this.s.batches.length === 0) {
throw new error_1.MongoInvalidArgumentError('Invalid BulkOperation, Batch cannot be empty');
}
this.s.executed = true;
const finalOptions = { ...this.s.options, ...options };
const operation = new BulkWriteShimOperation(this, finalOptions);
return (0, execute_operation_1.executeOperation)(this.s.collection.s.db.s.client, operation);
}
/**
* Handles the write error before executing commands
* @internal
*/
handleWriteError(callback, writeResult) {
if (this.s.bulkResult.writeErrors.length > 0) {
const msg = this.s.bulkResult.writeErrors[0].errmsg
? this.s.bulkResult.writeErrors[0].errmsg
: 'write operation failed';
callback(new MongoBulkWriteError({
message: msg,
code: this.s.bulkResult.writeErrors[0].code,
writeErrors: this.s.bulkResult.writeErrors
}, writeResult));
return true;
}
const writeConcernError = writeResult.getWriteConcernError();
if (writeConcernError) {
callback(new MongoBulkWriteError(writeConcernError, writeResult));
return true;
}
return false;
}
}
exports.BulkOperationBase = BulkOperationBase;
Object.defineProperty(BulkOperationBase.prototype, 'length', {
enumerable: true,
get() {
return this.s.currentIndex;
}
});
function shouldForceServerObjectId(bulkOperation) {
if (typeof bulkOperation.s.options.forceServerObjectId === 'boolean') {
return bulkOperation.s.options.forceServerObjectId;
}
if (typeof bulkOperation.s.collection.s.db.options?.forceServerObjectId === 'boolean') {
return bulkOperation.s.collection.s.db.options?.forceServerObjectId;
}
return false;
}
function isInsertBatch(batch) {
return batch.batchType === exports.BatchType.INSERT;
}
function isUpdateBatch(batch) {
return batch.batchType === exports.BatchType.UPDATE;
}
function isDeleteBatch(batch) {
return batch.batchType === exports.BatchType.DELETE;
}
function buildCurrentOp(bulkOp) {
let { currentOp } = bulkOp.s;
bulkOp.s.currentOp = undefined;
if (!currentOp)
currentOp = {};
return currentOp;
}
//# sourceMappingURL=common.js.map
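
A minimal usage sketch of the bulk API implemented above; `coll` is assumed to be a connected `Collection`:

```ts
// Hedged sketch: queue a few ops, then execute them as a single ordered bulk write.
const bulk = coll.initializeOrderedBulkOp();
bulk.insert({ name: 'alpha' });
bulk.find({ name: 'alpha' }).updateOne({ $set: { visited: true } });
bulk.find({ name: 'stale' }).deleteOne();
const result = await bulk.execute();
console.log(result.insertedCount, result.modifiedCount, result.deletedCount);
// Calling execute() again on the same instance throws MongoBatchReExecutionError.
```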

1
node_modules/mongodb/lib/bulk/common.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

67
node_modules/mongodb/lib/bulk/ordered.js generated vendored Normal file

@ -0,0 +1,67 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OrderedBulkOperation = void 0;
const BSON = require("../bson");
const error_1 = require("../error");
const common_1 = require("./common");
/** @public */
class OrderedBulkOperation extends common_1.BulkOperationBase {
/** @internal */
constructor(collection, options) {
super(collection, options, true);
}
addToOperationsList(batchType, document) {
// Get the bsonSize
const bsonSize = BSON.calculateObjectSize(document, {
checkKeys: false,
// Since we don't know what the user selected for BSON options here,
// err on the safe side, and check the size with ignoreUndefined: false.
ignoreUndefined: false
});
// Throw error if the doc is bigger than the max BSON size
if (bsonSize >= this.s.maxBsonObjectSize)
// TODO(NODE-3483): Change this to MongoBSONError
throw new error_1.MongoInvalidArgumentError(`Document is larger than the maximum size ${this.s.maxBsonObjectSize}`);
// Create a new batch object if we don't have a current one
if (this.s.currentBatch == null) {
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
}
const maxKeySize = this.s.maxKeySize;
// Check if we need to create a new batch
if (
// New batch if we exceed the max batch op size
this.s.currentBatchSize + 1 >= this.s.maxWriteBatchSize ||
        // New batch if we exceed the maxBatchSizeBytes. Only matters if batch already has a doc,
        // since we can't send an empty batch
(this.s.currentBatchSize > 0 &&
this.s.currentBatchSizeBytes + maxKeySize + bsonSize >= this.s.maxBatchSizeBytes) ||
// New batch if the new op does not have the same op type as the current batch
this.s.currentBatch.batchType !== batchType) {
// Save the batch to the execution stack
this.s.batches.push(this.s.currentBatch);
// Create a new batch
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
// Reset the current size trackers
this.s.currentBatchSize = 0;
this.s.currentBatchSizeBytes = 0;
}
if (batchType === common_1.BatchType.INSERT) {
this.s.bulkResult.insertedIds.push({
index: this.s.currentIndex,
_id: document._id
});
}
        // Reject arrays: operations must be added one document at a time
if (Array.isArray(document)) {
throw new error_1.MongoInvalidArgumentError('Operation passed in cannot be an Array');
}
this.s.currentBatch.originalIndexes.push(this.s.currentIndex);
this.s.currentBatch.operations.push(document);
this.s.currentBatchSize += 1;
this.s.currentBatchSizeBytes += maxKeySize + bsonSize;
this.s.currentIndex += 1;
return this;
}
}
exports.OrderedBulkOperation = OrderedBulkOperation;
//# sourceMappingURL=ordered.js.map
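
The batch-splitting rule in `addToOperationsList` above condenses to three checks; a hedged standalone sketch (names are illustrative, limits are the server-advertised values kept on `this.s`):

```ts
// Hedged sketch of the split condition used by OrderedBulkOperation above.
function needsNewBatch(
  batch: { size: number; sizeBytes: number; batchType: number },
  opType: number,
  opBytes: number,
  maxWriteBatchSize: number,
  maxBatchSizeBytes: number
): boolean {
  return (
    // too many operations in the batch
    batch.size + 1 >= maxWriteBatchSize ||
    // too many bytes; only applies once the batch has at least one doc,
    // since an empty batch can never be sent
    (batch.size > 0 && batch.sizeBytes + opBytes >= maxBatchSizeBytes) ||
    // ordered batches are homogeneous, so a new op type forces a new batch
    batch.batchType !== opType
  );
}
```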

1
node_modules/mongodb/lib/bulk/ordered.js.map generated vendored Normal file

@ -0,0 +1 @@
{"version":3,"file":"ordered.js","sourceRoot":"","sources":["../../src/bulk/ordered.ts"],"names":[],"mappings":";;;AACA,gCAAgC;AAEhC,oCAAqD;AAGrD,qCAAiF;AAEjF,cAAc;AACd,MAAa,oBAAqB,SAAQ,0BAAiB;IACzD,gBAAgB;IAChB,YAAY,UAAsB,EAAE,OAAyB;QAC3D,KAAK,CAAC,UAAU,EAAE,OAAO,EAAE,IAAI,CAAC,CAAC;IACnC,CAAC;IAED,mBAAmB,CACjB,SAAoB,EACpB,QAAsD;QAEtD,mBAAmB;QACnB,MAAM,QAAQ,GAAG,IAAI,CAAC,mBAAmB,CAAC,QAAQ,EAAE;YAClD,SAAS,EAAE,KAAK;YAChB,oEAAoE;YACpE,wEAAwE;YACxE,eAAe,EAAE,KAAK;SAChB,CAAC,CAAC;QAEV,0DAA0D;QAC1D,IAAI,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB;YACtC,iDAAiD;YACjD,MAAM,IAAI,iCAAyB,CACjC,4CAA4C,IAAI,CAAC,CAAC,CAAC,iBAAiB,EAAE,CACvE,CAAC;QAEJ,2DAA2D;QAC3D,IAAI,IAAI,CAAC,CAAC,CAAC,YAAY,IAAI,IAAI,EAAE;YAC/B,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;SACjE;QAED,MAAM,UAAU,GAAG,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC;QAErC,yCAAyC;QACzC;QACE,+CAA+C;QAC/C,IAAI,CAAC,CAAC,CAAC,gBAAgB,GAAG,CAAC,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB;YACvD,yFAAyF;YACzF,qCAAqC;YACrC,CAAC,IAAI,CAAC,CAAC,CAAC,gBAAgB,GAAG,CAAC;gBAC1B,IAAI,CAAC,CAAC,CAAC,qBAAqB,GAAG,UAAU,GAAG,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB,CAAC;YACnF,8EAA8E;YAC9E,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,KAAK,SAAS,EAC3C;YACA,wCAAwC;YACxC,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;YAEzC,qBAAqB;YACrB,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;YAEhE,kCAAkC;YAClC,IAAI,CAAC,CAAC,CAAC,gBAAgB,GAAG,CAAC,CAAC;YAC5B,IAAI,CAAC,CAAC,CAAC,qBAAqB,GAAG,CAAC,CAAC;SAClC;QAED,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE;YAClC,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC,WAAW,CAAC,IAAI,CAAC;gBACjC,KAAK,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY;gBAC1B,GAAG,EAAG,QAAqB,CAAC,GAAG;aAChC,CAAC,CAAC;SACJ;QAED,gCAAgC;QAChC,IAAI,KAAK,CAAC,OAAO,CAAC,QAAQ,CAAC,EAAE;YAC3B,MAAM,IAAI,iCAAyB,CAAC,wCAAwC,CAAC,CAAC;SAC/E;QAED,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,eAAe,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;QAC9D,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,UAAU,CAAC,IAAI,CAAC,QAAQ,CAAC,CAAC;QAC9C,IAAI,CAAC,CAAC,CAAC,gBAAgB,IAAI,CAAC,CAAC;QAC7B,IAAI,CAAC,CAAC,CAAC,qBAAqB,IAAI,UAAU,GAAG,QAAQ,CAAC;QACtD,IAAI,CAAC,CAAC,CAAC,YAAY,IAAI,CAAC,CAAC;QACzB,OAAO,IAAI,CAAC;IACd,CAAC;CACF;AAzED,oDAyEC"}

92
node_modules/mongodb/lib/bulk/unordered.js generated vendored Normal file

@ -0,0 +1,92 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.UnorderedBulkOperation = void 0;
const BSON = require("../bson");
const error_1 = require("../error");
const common_1 = require("./common");
/** @public */
class UnorderedBulkOperation extends common_1.BulkOperationBase {
/** @internal */
constructor(collection, options) {
super(collection, options, false);
}
handleWriteError(callback, writeResult) {
if (this.s.batches.length) {
return false;
}
return super.handleWriteError(callback, writeResult);
}
addToOperationsList(batchType, document) {
// Get the bsonSize
const bsonSize = BSON.calculateObjectSize(document, {
checkKeys: false,
// Since we don't know what the user selected for BSON options here,
// err on the safe side, and check the size with ignoreUndefined: false.
ignoreUndefined: false
});
// Throw error if the doc is bigger than the max BSON size
if (bsonSize >= this.s.maxBsonObjectSize) {
// TODO(NODE-3483): Change this to MongoBSONError
throw new error_1.MongoInvalidArgumentError(`Document is larger than the maximum size ${this.s.maxBsonObjectSize}`);
}
        // Reset the current batch; it will be re-pointed at the open batch for this op type
this.s.currentBatch = undefined;
// Get the right type of batch
if (batchType === common_1.BatchType.INSERT) {
this.s.currentBatch = this.s.currentInsertBatch;
}
else if (batchType === common_1.BatchType.UPDATE) {
this.s.currentBatch = this.s.currentUpdateBatch;
}
else if (batchType === common_1.BatchType.DELETE) {
this.s.currentBatch = this.s.currentRemoveBatch;
}
const maxKeySize = this.s.maxKeySize;
// Create a new batch object if we don't have a current one
if (this.s.currentBatch == null) {
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
}
// Check if we need to create a new batch
if (
// New batch if we exceed the max batch op size
this.s.currentBatch.size + 1 >= this.s.maxWriteBatchSize ||
        // New batch if we exceed the maxBatchSizeBytes. Only matters if batch already has a doc,
        // since we can't send an empty batch
(this.s.currentBatch.size > 0 &&
this.s.currentBatch.sizeBytes + maxKeySize + bsonSize >= this.s.maxBatchSizeBytes) ||
// New batch if the new op does not have the same op type as the current batch
this.s.currentBatch.batchType !== batchType) {
// Save the batch to the execution stack
this.s.batches.push(this.s.currentBatch);
// Create a new batch
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
}
        // Reject arrays: operations must be added one document at a time
if (Array.isArray(document)) {
throw new error_1.MongoInvalidArgumentError('Operation passed in cannot be an Array');
}
this.s.currentBatch.operations.push(document);
this.s.currentBatch.originalIndexes.push(this.s.currentIndex);
this.s.currentIndex = this.s.currentIndex + 1;
// Save back the current Batch to the right type
if (batchType === common_1.BatchType.INSERT) {
this.s.currentInsertBatch = this.s.currentBatch;
this.s.bulkResult.insertedIds.push({
index: this.s.bulkResult.insertedIds.length,
_id: document._id
});
}
else if (batchType === common_1.BatchType.UPDATE) {
this.s.currentUpdateBatch = this.s.currentBatch;
}
else if (batchType === common_1.BatchType.DELETE) {
this.s.currentRemoveBatch = this.s.currentBatch;
}
// Update current batch size
this.s.currentBatch.size += 1;
this.s.currentBatch.sizeBytes += maxKeySize + bsonSize;
return this;
}
}
exports.UnorderedBulkOperation = UnorderedBulkOperation;
//# sourceMappingURL=unordered.js.map
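
In contrast to the ordered variant, the implementation above keeps one open batch per operation type, so interleaved op types still coalesce. A hedged sketch (`coll` assumed as before):

```ts
// Hedged sketch: three inserts and two deletes interleave, yet land in only
// two batches, because unordered batching groups operations by type.
const bulk = coll.initializeUnorderedBulkOp();
bulk.insert({ a: 1 });
bulk.find({ stale: true }).deleteOne();
bulk.insert({ a: 2 });
bulk.find({ old: true }).deleteOne();
bulk.insert({ a: 3 });
await bulk.execute();
```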

1
node_modules/mongodb/lib/bulk/unordered.js.map generated vendored Normal file

@ -0,0 +1 @@
{"version":3,"file":"unordered.js","sourceRoot":"","sources":["../../src/bulk/unordered.ts"],"names":[],"mappings":";;;AACA,gCAAgC;AAEhC,oCAAqD;AAIrD,qCAAkG;AAElG,cAAc;AACd,MAAa,sBAAuB,SAAQ,0BAAiB;IAC3D,gBAAgB;IAChB,YAAY,UAAsB,EAAE,OAAyB;QAC3D,KAAK,CAAC,UAAU,EAAE,OAAO,EAAE,KAAK,CAAC,CAAC;IACpC,CAAC;IAEQ,gBAAgB,CAAC,QAAkB,EAAE,WAA4B;QACxE,IAAI,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,MAAM,EAAE;YACzB,OAAO,KAAK,CAAC;SACd;QAED,OAAO,KAAK,CAAC,gBAAgB,CAAC,QAAQ,EAAE,WAAW,CAAC,CAAC;IACvD,CAAC;IAED,mBAAmB,CACjB,SAAoB,EACpB,QAAsD;QAEtD,mBAAmB;QACnB,MAAM,QAAQ,GAAG,IAAI,CAAC,mBAAmB,CAAC,QAAQ,EAAE;YAClD,SAAS,EAAE,KAAK;YAEhB,oEAAoE;YACpE,wEAAwE;YACxE,eAAe,EAAE,KAAK;SAChB,CAAC,CAAC;QAEV,0DAA0D;QAC1D,IAAI,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB,EAAE;YACxC,iDAAiD;YACjD,MAAM,IAAI,iCAAyB,CACjC,4CAA4C,IAAI,CAAC,CAAC,CAAC,iBAAiB,EAAE,CACvE,CAAC;SACH;QAED,0BAA0B;QAC1B,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,SAAS,CAAC;QAChC,8BAA8B;QAC9B,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE;YAClC,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,kBAAkB,CAAC;SACjD;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE;YACzC,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,kBAAkB,CAAC;SACjD;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE;YACzC,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,kBAAkB,CAAC;SACjD;QAED,MAAM,UAAU,GAAG,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC;QAErC,2DAA2D;QAC3D,IAAI,IAAI,CAAC,CAAC,CAAC,YAAY,IAAI,IAAI,EAAE;YAC/B,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;SACjE;QAED,yCAAyC;QACzC;QACE,+CAA+C;QAC/C,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,IAAI,GAAG,CAAC,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB;YACxD,yFAAyF;YACzF,qCAAqC;YACrC,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,IAAI,GAAG,CAAC;gBAC3B,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,GAAG,UAAU,GAAG,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB,CAAC;YACpF,8EAA8E;YAC9E,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,KAAK,SAAS,EAC3C;YACA,wCAAwC;YACxC,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;YAEzC,qBAAqB;YACrB,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;SACjE;QAED,gCAAgC;QAChC,IAAI,KAAK,CAAC,OAAO,CAAC,QAAQ,CAAC,EAAE;YAC3B,MAAM,IAAI,iCAAyB,CAAC,wCAAwC,CAAC,CAAC;SAC/E;QAED,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,UAAU,CAAC,IAAI,CAAC,QAAQ,CAAC,CAAC;QAC9C,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,eAAe,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;QAC9D,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,CAAC,CAAC;QAE9C,gDAAgD;QAChD,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE;YAClC,IAAI,CAAC,CAAC,CAAC,kBAAkB,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC;YAChD,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC,WAAW,CAAC,IAAI,CAAC;gBACjC,KAAK,EAAE,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC,WAAW,CAAC,MAAM;gBAC3C,GAAG,EAAG,QAAqB,CAAC,GAAG;aAChC,CAAC,CAAC;SACJ;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE;YACzC,IAAI,CAAC,CAAC,CAAC,kBAAkB,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC;SACjD;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE;YACzC,IAAI,CAAC,CAAC,CAAC,kBAAkB,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC;SACjD;QAED,4BAA4B;QAC5B,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,IAAI,IAAI,CAAC,CAAC;QAC9B,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,IAAI,UAAU,GAAG,QAAQ,CAAC;QAEvD,OAAO,IAAI,CAAC;IACd,CAAC;CACF;AAnGD,wDAmGC"}

397
node_modules/mongodb/lib/change_stream.js generated vendored Normal file

@ -0,0 +1,397 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ChangeStream = void 0;
const collection_1 = require("./collection");
const constants_1 = require("./constants");
const change_stream_cursor_1 = require("./cursor/change_stream_cursor");
const db_1 = require("./db");
const error_1 = require("./error");
const mongo_client_1 = require("./mongo_client");
const mongo_types_1 = require("./mongo_types");
const utils_1 = require("./utils");
/** @internal */
const kCursorStream = Symbol('cursorStream');
/** @internal */
const kClosed = Symbol('closed');
/** @internal */
const kMode = Symbol('mode');
const CHANGE_STREAM_OPTIONS = [
'resumeAfter',
'startAfter',
'startAtOperationTime',
'fullDocument',
'fullDocumentBeforeChange',
'showExpandedEvents'
];
const CHANGE_DOMAIN_TYPES = {
COLLECTION: Symbol('Collection'),
DATABASE: Symbol('Database'),
CLUSTER: Symbol('Cluster')
};
const CHANGE_STREAM_EVENTS = [constants_1.RESUME_TOKEN_CHANGED, constants_1.END, constants_1.CLOSE];
const NO_RESUME_TOKEN_ERROR = 'A change stream document has been received that lacks a resume token (_id).';
const CHANGESTREAM_CLOSED_ERROR = 'ChangeStream is closed';
/**
* Creates a new Change Stream instance. Normally created using {@link Collection#watch|Collection.watch()}.
* @public
*/
class ChangeStream extends mongo_types_1.TypedEventEmitter {
/**
* @internal
*
* @param parent - The parent object that created this change stream
* @param pipeline - An array of {@link https://docs.mongodb.com/manual/reference/operator/aggregation-pipeline/|aggregation pipeline stages} through which to pass change stream documents
*/
constructor(parent, pipeline = [], options = {}) {
super();
this.pipeline = pipeline;
this.options = { ...options };
delete this.options.writeConcern;
if (parent instanceof collection_1.Collection) {
this.type = CHANGE_DOMAIN_TYPES.COLLECTION;
}
else if (parent instanceof db_1.Db) {
this.type = CHANGE_DOMAIN_TYPES.DATABASE;
}
else if (parent instanceof mongo_client_1.MongoClient) {
this.type = CHANGE_DOMAIN_TYPES.CLUSTER;
}
else {
throw new error_1.MongoChangeStreamError('Parent provided to ChangeStream constructor must be an instance of Collection, Db, or MongoClient');
}
this.parent = parent;
this.namespace = parent.s.namespace;
if (!this.options.readPreference && parent.readPreference) {
this.options.readPreference = parent.readPreference;
}
// Create contained Change Stream cursor
this.cursor = this._createChangeStreamCursor(options);
this[kClosed] = false;
this[kMode] = false;
// Listen for any `change` listeners being added to ChangeStream
this.on('newListener', eventName => {
if (eventName === 'change' && this.cursor && this.listenerCount('change') === 0) {
this._streamEvents(this.cursor);
}
});
this.on('removeListener', eventName => {
if (eventName === 'change' && this.listenerCount('change') === 0 && this.cursor) {
this[kCursorStream]?.removeAllListeners('data');
}
});
}
/** @internal */
get cursorStream() {
return this[kCursorStream];
}
/** The cached resume token that is used to resume after the most recently returned change. */
get resumeToken() {
return this.cursor?.resumeToken;
}
/** Check if there is any document still available in the Change Stream */
async hasNext() {
this._setIsIterator();
// Change streams must resume indefinitely while each resume event succeeds.
// This loop continues until either a change event is received or until a resume attempt
// fails.
// eslint-disable-next-line no-constant-condition
while (true) {
try {
const hasNext = await this.cursor.hasNext();
return hasNext;
}
catch (error) {
try {
await this._processErrorIteratorMode(error);
}
catch (error) {
try {
await this.close();
}
catch {
// We are not concerned with errors from close()
}
throw error;
}
}
}
}
/** Get the next available document from the Change Stream. */
async next() {
this._setIsIterator();
// Change streams must resume indefinitely while each resume event succeeds.
// This loop continues until either a change event is received or until a resume attempt
// fails.
// eslint-disable-next-line no-constant-condition
while (true) {
try {
const change = await this.cursor.next();
const processedChange = this._processChange(change ?? null);
return processedChange;
}
catch (error) {
try {
await this._processErrorIteratorMode(error);
}
catch (error) {
try {
await this.close();
}
catch {
// We are not concerned with errors from close()
}
throw error;
}
}
}
}
/**
* Try to get the next available document from the Change Stream's cursor or `null` if an empty batch is returned
*/
async tryNext() {
this._setIsIterator();
// Change streams must resume indefinitely while each resume event succeeds.
// This loop continues until either a change event is received or until a resume attempt
// fails.
// eslint-disable-next-line no-constant-condition
while (true) {
try {
const change = await this.cursor.tryNext();
return change ?? null;
}
catch (error) {
try {
await this._processErrorIteratorMode(error);
}
catch (error) {
try {
await this.close();
}
catch {
// We are not concerned with errors from close()
}
throw error;
}
}
}
}
async *[Symbol.asyncIterator]() {
if (this.closed) {
return;
}
try {
// Change streams run indefinitely as long as errors are resumable
// So the only loop breaking condition is if `next()` throws
while (true) {
yield await this.next();
}
}
finally {
try {
await this.close();
}
catch {
// we're not concerned with errors from close()
}
}
}
/** Is the cursor closed */
get closed() {
return this[kClosed] || this.cursor.closed;
}
/** Close the Change Stream */
async close() {
this[kClosed] = true;
const cursor = this.cursor;
try {
await cursor.close();
}
finally {
this._endStream();
}
}
/**
* Return a modified Readable stream including a possible transform method.
*
* NOTE: When using a Stream to process change stream events, the stream will
* NOT automatically resume in the case a resumable error is encountered.
*
* @throws MongoChangeStreamError if the underlying cursor or the change stream is closed
*/
stream(options) {
if (this.closed) {
throw new error_1.MongoChangeStreamError(CHANGESTREAM_CLOSED_ERROR);
}
this.streamOptions = options;
return this.cursor.stream(options);
}
/** @internal */
_setIsEmitter() {
if (this[kMode] === 'iterator') {
// TODO(NODE-3485): Replace with MongoChangeStreamModeError
throw new error_1.MongoAPIError('ChangeStream cannot be used as an EventEmitter after being used as an iterator');
}
this[kMode] = 'emitter';
}
/** @internal */
_setIsIterator() {
if (this[kMode] === 'emitter') {
// TODO(NODE-3485): Replace with MongoChangeStreamModeError
throw new error_1.MongoAPIError('ChangeStream cannot be used as an iterator after being used as an EventEmitter');
}
this[kMode] = 'iterator';
}
/**
* Create a new change stream cursor based on self's configuration
* @internal
*/
_createChangeStreamCursor(options) {
const changeStreamStageOptions = (0, utils_1.filterOptions)(options, CHANGE_STREAM_OPTIONS);
if (this.type === CHANGE_DOMAIN_TYPES.CLUSTER) {
changeStreamStageOptions.allChangesForCluster = true;
}
const pipeline = [{ $changeStream: changeStreamStageOptions }, ...this.pipeline];
const client = this.type === CHANGE_DOMAIN_TYPES.CLUSTER
? this.parent
: this.type === CHANGE_DOMAIN_TYPES.DATABASE
? this.parent.s.client
: this.type === CHANGE_DOMAIN_TYPES.COLLECTION
? this.parent.s.db.s.client
: null;
if (client == null) {
// This should never happen because of the assertion in the constructor
throw new error_1.MongoRuntimeError(`Changestream type should only be one of cluster, database, collection. Found ${this.type.toString()}`);
}
const changeStreamCursor = new change_stream_cursor_1.ChangeStreamCursor(client, this.namespace, pipeline, options);
for (const event of CHANGE_STREAM_EVENTS) {
changeStreamCursor.on(event, e => this.emit(event, e));
}
if (this.listenerCount(ChangeStream.CHANGE) > 0) {
this._streamEvents(changeStreamCursor);
}
return changeStreamCursor;
}
/** @internal */
_closeEmitterModeWithError(error) {
this.emit(ChangeStream.ERROR, error);
this.close().catch(() => null);
}
/** @internal */
_streamEvents(cursor) {
this._setIsEmitter();
const stream = this[kCursorStream] ?? cursor.stream();
this[kCursorStream] = stream;
stream.on('data', change => {
try {
const processedChange = this._processChange(change);
this.emit(ChangeStream.CHANGE, processedChange);
}
catch (error) {
this.emit(ChangeStream.ERROR, error);
}
});
stream.on('error', error => this._processErrorStreamMode(error));
}
/** @internal */
_endStream() {
const cursorStream = this[kCursorStream];
if (cursorStream) {
['data', 'close', 'end', 'error'].forEach(event => cursorStream.removeAllListeners(event));
cursorStream.destroy();
}
this[kCursorStream] = undefined;
}
/** @internal */
_processChange(change) {
if (this[kClosed]) {
// TODO(NODE-3485): Replace with MongoChangeStreamClosedError
throw new error_1.MongoAPIError(CHANGESTREAM_CLOSED_ERROR);
}
// a null change means the cursor has been notified, implicitly closing the change stream
if (change == null) {
// TODO(NODE-3485): Replace with MongoChangeStreamClosedError
throw new error_1.MongoRuntimeError(CHANGESTREAM_CLOSED_ERROR);
}
if (change && !change._id) {
throw new error_1.MongoChangeStreamError(NO_RESUME_TOKEN_ERROR);
}
// cache the resume token
this.cursor.cacheResumeToken(change._id);
// wipe the startAtOperationTime if there was one so that there won't be a conflict
// between resumeToken and startAtOperationTime if we need to reconnect the cursor
this.options.startAtOperationTime = undefined;
return change;
}
/** @internal */
_processErrorStreamMode(changeStreamError) {
// If the change stream has been closed explicitly, do not process error.
if (this[kClosed])
return;
if ((0, error_1.isResumableError)(changeStreamError, this.cursor.maxWireVersion)) {
this._endStream();
this.cursor.close().catch(() => null);
const topology = (0, utils_1.getTopology)(this.parent);
topology.selectServer(this.cursor.readPreference, {}, serverSelectionError => {
if (serverSelectionError)
return this._closeEmitterModeWithError(changeStreamError);
this.cursor = this._createChangeStreamCursor(this.cursor.resumeOptions);
});
}
else {
this._closeEmitterModeWithError(changeStreamError);
}
}
/** @internal */
async _processErrorIteratorMode(changeStreamError) {
if (this[kClosed]) {
// TODO(NODE-3485): Replace with MongoChangeStreamClosedError
throw new error_1.MongoAPIError(CHANGESTREAM_CLOSED_ERROR);
}
if (!(0, error_1.isResumableError)(changeStreamError, this.cursor.maxWireVersion)) {
try {
await this.close();
}
catch {
// ignore errors from close
}
throw changeStreamError;
}
await this.cursor.close().catch(() => null);
const topology = (0, utils_1.getTopology)(this.parent);
try {
await topology.selectServerAsync(this.cursor.readPreference, {});
this.cursor = this._createChangeStreamCursor(this.cursor.resumeOptions);
}
catch {
// if the topology can't reconnect, close the stream
await this.close();
throw changeStreamError;
}
}
}
exports.ChangeStream = ChangeStream;
/** @event */
ChangeStream.RESPONSE = constants_1.RESPONSE;
/** @event */
ChangeStream.MORE = constants_1.MORE;
/** @event */
ChangeStream.INIT = constants_1.INIT;
/** @event */
ChangeStream.CLOSE = constants_1.CLOSE;
/**
* Fired for each new matching change in the specified namespace. Attaching a `change`
* event listener to a Change Stream will switch the stream into flowing mode. Data will
* then be passed as soon as it is available.
* @event
*/
ChangeStream.CHANGE = constants_1.CHANGE;
/** @event */
ChangeStream.END = constants_1.END;
/** @event */
ChangeStream.ERROR = constants_1.ERROR;
/**
* Emitted each time the change stream stores a new resume token.
* @event
*/
ChangeStream.RESUME_TOKEN_CHANGED = constants_1.RESUME_TOKEN_CHANGED;
//# sourceMappingURL=change_stream.js.map
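
The class above enforces a one-way choice between iterator and emitter usage (`_setIsIterator` vs `_setIsEmitter`). A hedged sketch of both modes, assuming a connected `coll`:

```ts
// Iterator mode: for-await drives next(); leaving the loop closes the stream
// via the finally block in Symbol.asyncIterator above.
const changes = coll.watch([{ $match: { operationType: 'insert' } }]);
for await (const change of changes) {
  console.log(change.fullDocument);
  break;
}

// Emitter mode: attaching a 'change' listener switches the stream to flowing mode.
// Mixing the two modes on one instance throws MongoAPIError.
const events = coll.watch();
events.on('change', change => console.log(change.operationType));
```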

1
node_modules/mongodb/lib/change_stream.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

36
node_modules/mongodb/lib/cmap/auth/auth_provider.js generated vendored Normal file

@ -0,0 +1,36 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AuthProvider = exports.AuthContext = void 0;
const error_1 = require("../../error");
/** Context used during authentication */
class AuthContext {
constructor(connection, credentials, options) {
this.connection = connection;
this.credentials = credentials;
this.options = options;
}
}
exports.AuthContext = AuthContext;
class AuthProvider {
/**
* Prepare the handshake document before the initial handshake.
*
* @param handshakeDoc - The document used for the initial handshake on a connection
* @param authContext - Context for authentication flow
*/
prepare(handshakeDoc, authContext, callback) {
callback(undefined, handshakeDoc);
}
/**
* Authenticate
*
* @param context - A shared context for authentication flow
* @param callback - The callback to return the result from the authentication
*/
auth(context, callback) {
// TODO(NODE-3483): Replace this with MongoMethodOverrideError
callback(new error_1.MongoRuntimeError('`auth` method must be overridden by subclass'));
}
}
exports.AuthProvider = AuthProvider;
//# sourceMappingURL=auth_provider.js.map
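
The base class above is callback-driven: `prepare` defaults to a pass-through and `auth` must be overridden. A hedged sketch of the subclass contract (the provider name is hypothetical; `Callback` stands in for the driver's internal callback type):

```ts
// Hedged sketch: the minimal shape of a concrete AuthProvider subclass.
class ExampleProvider extends AuthProvider {
  override auth(context: AuthContext, callback: Callback): void {
    if (context.credentials == null) {
      return callback(new MongoMissingCredentialsError('credentials required'));
    }
    // ...drive saslStart/saslContinue over context.connection here...
    callback(); // report success by invoking the callback with no error
  }
}
```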

1
node_modules/mongodb/lib/cmap/auth/auth_provider.js.map generated vendored Normal file

@ -0,0 +1 @@
{"version":3,"file":"auth_provider.js","sourceRoot":"","sources":["../../../src/cmap/auth/auth_provider.ts"],"names":[],"mappings":";;;AACA,uCAAgD;AAQhD,yCAAyC;AACzC,MAAa,WAAW;IAatB,YACE,UAAsB,EACtB,WAAyC,EACzC,OAA2B;QAE3B,IAAI,CAAC,UAAU,GAAG,UAAU,CAAC;QAC7B,IAAI,CAAC,WAAW,GAAG,WAAW,CAAC;QAC/B,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;IACzB,CAAC;CACF;AAtBD,kCAsBC;AAED,MAAa,YAAY;IACvB;;;;;OAKG;IACH,OAAO,CACL,YAA+B,EAC/B,WAAwB,EACxB,QAAqC;QAErC,QAAQ,CAAC,SAAS,EAAE,YAAY,CAAC,CAAC;IACpC,CAAC;IAED;;;;;OAKG;IACH,IAAI,CAAC,OAAoB,EAAE,QAAkB;QAC3C,8DAA8D;QAC9D,QAAQ,CAAC,IAAI,yBAAiB,CAAC,8CAA8C,CAAC,CAAC,CAAC;IAClF,CAAC;CACF;AAzBD,oCAyBC"}

188
node_modules/mongodb/lib/cmap/auth/gssapi.js generated vendored Normal file

@ -0,0 +1,188 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.resolveCname = exports.performGSSAPICanonicalizeHostName = exports.GSSAPI = exports.GSSAPICanonicalizationValue = void 0;
const dns = require("dns");
const deps_1 = require("../../deps");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
/** @public */
exports.GSSAPICanonicalizationValue = Object.freeze({
on: true,
off: false,
none: 'none',
forward: 'forward',
forwardAndReverse: 'forwardAndReverse'
});
class GSSAPI extends auth_provider_1.AuthProvider {
auth(authContext, callback) {
const { connection, credentials } = authContext;
if (credentials == null)
return callback(new error_1.MongoMissingCredentialsError('Credentials required for GSSAPI authentication'));
const { username } = credentials;
function externalCommand(command, cb) {
return connection.command((0, utils_1.ns)('$external.$cmd'), command, undefined, cb);
}
makeKerberosClient(authContext, (err, client) => {
if (err)
return callback(err);
if (client == null)
return callback(new error_1.MongoMissingDependencyError('GSSAPI client missing'));
client.step('', (err, payload) => {
if (err)
return callback(err);
externalCommand(saslStart(payload), (err, result) => {
if (err)
return callback(err);
if (result == null)
return callback();
negotiate(client, 10, result.payload, (err, payload) => {
if (err)
return callback(err);
externalCommand(saslContinue(payload, result.conversationId), (err, result) => {
if (err)
return callback(err);
if (result == null)
return callback();
finalize(client, username, result.payload, (err, payload) => {
if (err)
return callback(err);
externalCommand({
saslContinue: 1,
conversationId: result.conversationId,
payload
}, (err, result) => {
if (err)
return callback(err);
callback(undefined, result);
});
});
});
});
});
});
});
}
}
exports.GSSAPI = GSSAPI;
function makeKerberosClient(authContext, callback) {
const { hostAddress } = authContext.options;
const { credentials } = authContext;
if (!hostAddress || typeof hostAddress.host !== 'string' || !credentials) {
return callback(new error_1.MongoInvalidArgumentError('Connection must have host and port and credentials defined.'));
}
if ('kModuleError' in deps_1.Kerberos) {
return callback(deps_1.Kerberos['kModuleError']);
}
const { initializeClient } = deps_1.Kerberos;
const { username, password } = credentials;
const mechanismProperties = credentials.mechanismProperties;
const serviceName = mechanismProperties.SERVICE_NAME ?? 'mongodb';
performGSSAPICanonicalizeHostName(hostAddress.host, mechanismProperties, (err, host) => {
if (err)
return callback(err);
const initOptions = {};
if (password != null) {
Object.assign(initOptions, { user: username, password: password });
}
const spnHost = mechanismProperties.SERVICE_HOST ?? host;
let spn = `${serviceName}${process.platform === 'win32' ? '/' : '@'}${spnHost}`;
if ('SERVICE_REALM' in mechanismProperties) {
spn = `${spn}@${mechanismProperties.SERVICE_REALM}`;
}
initializeClient(spn, initOptions, (err, client) => {
// TODO(NODE-3483)
if (err)
return callback(new error_1.MongoRuntimeError(err));
callback(undefined, client);
});
});
}
function saslStart(payload) {
return {
saslStart: 1,
mechanism: 'GSSAPI',
payload,
autoAuthorize: 1
};
}
function saslContinue(payload, conversationId) {
return {
saslContinue: 1,
conversationId,
payload
};
}
function negotiate(client, retries, payload, callback) {
client.step(payload, (err, response) => {
// Retries exhausted, raise error
if (err && retries === 0)
return callback(err);
// Adjust number of retries and call step again
if (err)
return negotiate(client, retries - 1, payload, callback);
// Return the payload
callback(undefined, response || '');
});
}
function finalize(client, user, payload, callback) {
// GSS Client Unwrap
client.unwrap(payload, (err, response) => {
if (err)
return callback(err);
// Wrap the response
client.wrap(response || '', { user }, (err, wrapped) => {
if (err)
return callback(err);
// Return the payload
callback(undefined, wrapped);
});
});
}
function performGSSAPICanonicalizeHostName(host, mechanismProperties, callback) {
const mode = mechanismProperties.CANONICALIZE_HOST_NAME;
if (!mode || mode === exports.GSSAPICanonicalizationValue.none) {
return callback(undefined, host);
}
    // Canonicalize when the mode is 'on' (true) or 'forwardAndReverse'
    if (mode === exports.GSSAPICanonicalizationValue.on ||
        mode === exports.GSSAPICanonicalizationValue.forwardAndReverse) {
        // Perform the lookup of the IP address.
        dns.lookup(host, (error, address) => {
            // No IP found, return the error.
            if (error)
                return callback(error);
            // Perform a reverse PTR lookup on the IP address.
            dns.resolvePtr(address, (err, results) => {
                // This can error as PTR records may not exist for all IPs. In this
                // case, fall back to a CNAME lookup, since dns.lookup() does not
                // return the CNAME.
if (err) {
return resolveCname(host, callback);
}
// If the ptr did not error but had no results, return the host.
callback(undefined, results.length > 0 ? results[0] : host);
});
});
}
else {
// The case for forward is just to resolve the cname as dns.lookup()
// will not return it.
resolveCname(host, callback);
}
}
exports.performGSSAPICanonicalizeHostName = performGSSAPICanonicalizeHostName;
function resolveCname(host, callback) {
// Attempt to resolve the host name
dns.resolveCname(host, (err, r) => {
if (err)
return callback(undefined, host);
        // Return the first resolved CNAME if one exists
if (r.length > 0) {
return callback(undefined, r[0]);
}
callback(undefined, host);
});
}
exports.resolveCname = resolveCname;
//# sourceMappingURL=gssapi.js.map
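
Callers select the canonicalization behaviour implemented above through `authMechanismProperties`; a hedged connection-string sketch (host and principal are placeholders):

```ts
// Hedged sketch: CANONICALIZE_HOST_NAME feeds performGSSAPICanonicalizeHostName above.
import { MongoClient } from 'mongodb';

const client = new MongoClient(
  'mongodb://principal%40EXAMPLE.COM@db.example.com/?authMechanism=GSSAPI' +
    '&authSource=%24external' +
    '&authMechanismProperties=SERVICE_NAME:mongodb,CANONICALIZE_HOST_NAME:forwardAndReverse'
);
```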

1
node_modules/mongodb/lib/cmap/auth/gssapi.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

145
node_modules/mongodb/lib/cmap/auth/mongo_credentials.js generated vendored Normal file

@ -0,0 +1,145 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoCredentials = void 0;
const error_1 = require("../../error");
const gssapi_1 = require("./gssapi");
const providers_1 = require("./providers");
// https://github.com/mongodb/specifications/blob/master/source/auth/auth.rst
function getDefaultAuthMechanism(hello) {
if (hello) {
// If hello contains saslSupportedMechs, use scram-sha-256
// if it is available, else scram-sha-1
if (Array.isArray(hello.saslSupportedMechs)) {
return hello.saslSupportedMechs.includes(providers_1.AuthMechanism.MONGODB_SCRAM_SHA256)
? providers_1.AuthMechanism.MONGODB_SCRAM_SHA256
: providers_1.AuthMechanism.MONGODB_SCRAM_SHA1;
}
// Fallback to legacy selection method. If wire version >= 3, use scram-sha-1
if (hello.maxWireVersion >= 3) {
return providers_1.AuthMechanism.MONGODB_SCRAM_SHA1;
}
}
    // Default for wire protocol < 3
return providers_1.AuthMechanism.MONGODB_CR;
}
/**
* A representation of the credentials used by MongoDB
* @public
*/
class MongoCredentials {
constructor(options) {
this.username = options.username;
this.password = options.password;
this.source = options.source;
if (!this.source && options.db) {
this.source = options.db;
}
this.mechanism = options.mechanism || providers_1.AuthMechanism.MONGODB_DEFAULT;
this.mechanismProperties = options.mechanismProperties || {};
if (this.mechanism.match(/MONGODB-AWS/i)) {
if (!this.username && process.env.AWS_ACCESS_KEY_ID) {
this.username = process.env.AWS_ACCESS_KEY_ID;
}
if (!this.password && process.env.AWS_SECRET_ACCESS_KEY) {
this.password = process.env.AWS_SECRET_ACCESS_KEY;
}
if (this.mechanismProperties.AWS_SESSION_TOKEN == null &&
process.env.AWS_SESSION_TOKEN != null) {
this.mechanismProperties = {
...this.mechanismProperties,
AWS_SESSION_TOKEN: process.env.AWS_SESSION_TOKEN
};
}
}
Object.freeze(this.mechanismProperties);
Object.freeze(this);
}
/** Determines if two MongoCredentials objects are equivalent */
equals(other) {
return (this.mechanism === other.mechanism &&
this.username === other.username &&
this.password === other.password &&
this.source === other.source);
}
/**
* If the authentication mechanism is set to "default", resolves the authMechanism
* based on the server version and server supported sasl mechanisms.
*
* @param hello - A hello response from the server
*/
resolveAuthMechanism(hello) {
        // Only a mechanism of 'default' needs to be resolved; all others are returned unchanged
if (this.mechanism.match(/DEFAULT/i)) {
return new MongoCredentials({
username: this.username,
password: this.password,
source: this.source,
mechanism: getDefaultAuthMechanism(hello),
mechanismProperties: this.mechanismProperties
});
}
return this;
}
validate() {
if ((this.mechanism === providers_1.AuthMechanism.MONGODB_GSSAPI ||
this.mechanism === providers_1.AuthMechanism.MONGODB_CR ||
this.mechanism === providers_1.AuthMechanism.MONGODB_PLAIN ||
this.mechanism === providers_1.AuthMechanism.MONGODB_SCRAM_SHA1 ||
this.mechanism === providers_1.AuthMechanism.MONGODB_SCRAM_SHA256) &&
!this.username) {
throw new error_1.MongoMissingCredentialsError(`Username required for mechanism '${this.mechanism}'`);
}
if (this.mechanism === providers_1.AuthMechanism.MONGODB_OIDC) {
if (this.username) {
throw new error_1.MongoInvalidArgumentError(`Username not permitted for mechanism '${this.mechanism}'. Use PRINCIPAL_NAME instead.`);
}
if (this.mechanismProperties.PRINCIPAL_NAME && this.mechanismProperties.DEVICE_NAME) {
throw new error_1.MongoInvalidArgumentError(`PRINCIPAL_NAME and DEVICE_NAME may not be used together for mechanism '${this.mechanism}'.`);
}
if (this.mechanismProperties.DEVICE_NAME && this.mechanismProperties.DEVICE_NAME !== 'aws') {
throw new error_1.MongoInvalidArgumentError(`Currently only a DEVICE_NAME of 'aws' is supported for mechanism '${this.mechanism}'.`);
}
if (this.mechanismProperties.REFRESH_TOKEN_CALLBACK &&
!this.mechanismProperties.REQUEST_TOKEN_CALLBACK) {
throw new error_1.MongoInvalidArgumentError(`A REQUEST_TOKEN_CALLBACK must be provided when using a REFRESH_TOKEN_CALLBACK for mechanism '${this.mechanism}'`);
}
if (!this.mechanismProperties.DEVICE_NAME &&
!this.mechanismProperties.REQUEST_TOKEN_CALLBACK) {
throw new error_1.MongoInvalidArgumentError(`Either a DEVICE_NAME or a REQUEST_TOKEN_CALLBACK must be specified for mechanism '${this.mechanism}'.`);
}
}
if (providers_1.AUTH_MECHS_AUTH_SRC_EXTERNAL.has(this.mechanism)) {
if (this.source != null && this.source !== '$external') {
// TODO(NODE-3485): Replace this with a MongoAuthValidationError
throw new error_1.MongoAPIError(`Invalid source '${this.source}' for mechanism '${this.mechanism}' specified.`);
}
}
if (this.mechanism === providers_1.AuthMechanism.MONGODB_PLAIN && this.source == null) {
// TODO(NODE-3485): Replace this with a MongoAuthValidationError
throw new error_1.MongoAPIError('PLAIN Authentication Mechanism needs an auth source');
}
if (this.mechanism === providers_1.AuthMechanism.MONGODB_X509 && this.password != null) {
if (this.password === '') {
Reflect.set(this, 'password', undefined);
return;
}
// TODO(NODE-3485): Replace this with a MongoAuthValidationError
throw new error_1.MongoAPIError(`Password not allowed for mechanism MONGODB-X509`);
}
const canonicalization = this.mechanismProperties.CANONICALIZE_HOST_NAME ?? false;
if (!Object.values(gssapi_1.GSSAPICanonicalizationValue).includes(canonicalization)) {
throw new error_1.MongoAPIError(`Invalid CANONICALIZE_HOST_NAME value: ${canonicalization}`);
}
}
static merge(creds, options) {
return new MongoCredentials({
username: options.username ?? creds?.username ?? '',
password: options.password ?? creds?.password ?? '',
mechanism: options.mechanism ?? creds?.mechanism ?? providers_1.AuthMechanism.MONGODB_DEFAULT,
mechanismProperties: options.mechanismProperties ?? creds?.mechanismProperties ?? {},
source: options.source ?? options.db ?? creds?.source ?? 'admin'
});
}
}
exports.MongoCredentials = MongoCredentials;
//# sourceMappingURL=mongo_credentials.js.map
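
How `resolveAuthMechanism` above plays out against a server hello, as a hedged sketch (hello fields abbreviated):

```ts
// Hedged sketch: 'DEFAULT' resolves from saslSupportedMechs when the server advertises it.
const creds = new MongoCredentials({
  username: 'app',
  password: 'secret',
  source: 'admin',
  mechanism: 'DEFAULT'
});
const resolved = creds.resolveAuthMechanism({ saslSupportedMechs: ['SCRAM-SHA-256'] });
// resolved.mechanism === 'SCRAM-SHA-256'; without that field, maxWireVersion decides.
```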

1
node_modules/mongodb/lib/cmap/auth/mongo_credentials.js.map generated vendored Normal file

@ -0,0 +1 @@
{"version":3,"file":"mongo_credentials.js","sourceRoot":"","sources":["../../../src/cmap/auth/mongo_credentials.ts"],"names":[],"mappings":";;;AAEA,uCAIqB;AACrB,qCAAuD;AAEvD,2CAA0E;AAE1E,6EAA6E;AAC7E,SAAS,uBAAuB,CAAC,KAAgB;IAC/C,IAAI,KAAK,EAAE;QACT,0DAA0D;QAC1D,uCAAuC;QACvC,IAAI,KAAK,CAAC,OAAO,CAAC,KAAK,CAAC,kBAAkB,CAAC,EAAE;YAC3C,OAAO,KAAK,CAAC,kBAAkB,CAAC,QAAQ,CAAC,yBAAa,CAAC,oBAAoB,CAAC;gBAC1E,CAAC,CAAC,yBAAa,CAAC,oBAAoB;gBACpC,CAAC,CAAC,yBAAa,CAAC,kBAAkB,CAAC;SACtC;QAED,6EAA6E;QAC7E,IAAI,KAAK,CAAC,cAAc,IAAI,CAAC,EAAE;YAC7B,OAAO,yBAAa,CAAC,kBAAkB,CAAC;SACzC;KACF;IAED,+BAA+B;IAC/B,OAAO,yBAAa,CAAC,UAAU,CAAC;AAClC,CAAC;AAiCD;;;GAGG;AACH,MAAa,gBAAgB;IAY3B,YAAY,OAAgC;QAC1C,IAAI,CAAC,QAAQ,GAAG,OAAO,CAAC,QAAQ,CAAC;QACjC,IAAI,CAAC,QAAQ,GAAG,OAAO,CAAC,QAAQ,CAAC;QACjC,IAAI,CAAC,MAAM,GAAG,OAAO,CAAC,MAAM,CAAC;QAC7B,IAAI,CAAC,IAAI,CAAC,MAAM,IAAI,OAAO,CAAC,EAAE,EAAE;YAC9B,IAAI,CAAC,MAAM,GAAG,OAAO,CAAC,EAAE,CAAC;SAC1B;QACD,IAAI,CAAC,SAAS,GAAG,OAAO,CAAC,SAAS,IAAI,yBAAa,CAAC,eAAe,CAAC;QACpE,IAAI,CAAC,mBAAmB,GAAG,OAAO,CAAC,mBAAmB,IAAI,EAAE,CAAC;QAE7D,IAAI,IAAI,CAAC,SAAS,CAAC,KAAK,CAAC,cAAc,CAAC,EAAE;YACxC,IAAI,CAAC,IAAI,CAAC,QAAQ,IAAI,OAAO,CAAC,GAAG,CAAC,iBAAiB,EAAE;gBACnD,IAAI,CAAC,QAAQ,GAAG,OAAO,CAAC,GAAG,CAAC,iBAAiB,CAAC;aAC/C;YAED,IAAI,CAAC,IAAI,CAAC,QAAQ,IAAI,OAAO,CAAC,GAAG,CAAC,qBAAqB,EAAE;gBACvD,IAAI,CAAC,QAAQ,GAAG,OAAO,CAAC,GAAG,CAAC,qBAAqB,CAAC;aACnD;YAED,IACE,IAAI,CAAC,mBAAmB,CAAC,iBAAiB,IAAI,IAAI;gBAClD,OAAO,CAAC,GAAG,CAAC,iBAAiB,IAAI,IAAI,EACrC;gBACA,IAAI,CAAC,mBAAmB,GAAG;oBACzB,GAAG,IAAI,CAAC,mBAAmB;oBAC3B,iBAAiB,EAAE,OAAO,CAAC,GAAG,CAAC,iBAAiB;iBACjD,CAAC;aACH;SACF;QAED,MAAM,CAAC,MAAM,CAAC,IAAI,CAAC,mBAAmB,CAAC,CAAC;QACxC,MAAM,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC;IACtB,CAAC;IAED,gEAAgE;IAChE,MAAM,CAAC,KAAuB;QAC5B,OAAO,CACL,IAAI,CAAC,SAAS,KAAK,KAAK,CAAC,SAAS;YAClC,IAAI,CAAC,QAAQ,KAAK,KAAK,CAAC,QAAQ;YAChC,IAAI,CAAC,QAAQ,KAAK,KAAK,CAAC,QAAQ;YAChC,IAAI,CAAC,MAAM,KAAK,KAAK,CAAC,MAAM,CAC7B,CAAC;IACJ,CAAC;IAED;;;;;OAKG;IACH,oBAAoB,CAAC,KAAgB;QACnC,0EAA0E;QAC1E,IAAI,IAAI,CAAC,SAAS,CAAC,KAAK,CAAC,UAAU,CAAC,EAAE;YACpC,OAAO,IAAI,gBAAgB,CAAC;gBAC1B,QAAQ,EAAE,IAAI,CAAC,QAAQ;gBACvB,QAAQ,EAAE,IAAI,CAAC,QAAQ;gBACvB,MAAM,EAAE,IAAI,CAAC,MAAM;gBACnB,SAAS,EAAE,uBAAuB,CAAC,KAAK,CAAC;gBACzC,mBAAmB,EAAE,IAAI,CAAC,mBAAmB;aAC9C,CAAC,CAAC;SACJ;QAED,OAAO,IAAI,CAAC;IACd,CAAC;IAED,QAAQ;QACN,IACE,CAAC,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,cAAc;YAC9C,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,UAAU;YAC3C,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,aAAa;YAC9C,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,kBAAkB;YACnD,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,oBAAoB,CAAC;YACxD,CAAC,IAAI,CAAC,QAAQ,EACd;YACA,MAAM,IAAI,oCAA4B,CAAC,oCAAoC,IAAI,CAAC,SAAS,GAAG,CAAC,CAAC;SAC/F;QAED,IAAI,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,YAAY,EAAE;YACjD,IAAI,IAAI,CAAC,QAAQ,EAAE;gBACjB,MAAM,IAAI,iCAAyB,CACjC,yCAAyC,IAAI,CAAC,SAAS,gCAAgC,CACxF,CAAC;aACH;YAED,IAAI,IAAI,CAAC,mBAAmB,CAAC,cAAc,IAAI,IAAI,CAAC,mBAAmB,CAAC,WAAW,EAAE;gBACnF,MAAM,IAAI,iCAAyB,CACjC,0EAA0E,IAAI,CAAC,SAAS,IAAI,CAC7F,CAAC;aACH;YAED,IAAI,IAAI,CAAC,mBAAmB,CAAC,WAAW,IAAI,IAAI,CAAC,mBAAmB,CAAC,WAAW,KAAK,KAAK,EAAE;gBAC1F,MAAM,IAAI,iCAAyB,CACjC,qEAAqE,IAAI,CAAC,SAAS,IAAI,CACxF,CAAC;aACH;YAED,IACE,IAAI,CAAC,mBAAmB,CAAC,sBAAsB;gBAC/C,CAAC,IAAI,CAAC,mBAAmB,CAAC,sBAAsB,EAChD;gBACA,MAAM,IAAI,iCAAyB,CACjC,gGAAgG,IAAI,CAAC,SAAS,GAAG,CAClH,CAAC;aACH;YAED,IACE,CAAC,IAAI,CAAC,mBAAmB,CAAC,WAAW;gBACrC,CAAC,IAAI,CAAC,mBAAmB,CAAC,sBAAsB,EAChD;gBACA,MAAM,IAAI,iCAAyB,CACjC,qFAAqF,IAAI,CAAC,SAAS,IAAI,CACxG,CAAC;aACH;SACF;QAED,IAAI,wCAA4B,CAAC,GAAG,CAAC,IAAI,CAAC,SAAS,CAAC,EAAE;YACpD,IAAI,IAAI,CAAC,MAAM,IAAI,IAAI,IAAI,IAAI,CAAC,MAAM,KAAK,WAAW,EAAE;gBACtD,gEAAgE;gBAChE,MA
AM,IAAI,qBAAa,CACrB,mBAAmB,IAAI,CAAC,MAAM,oBAAoB,IAAI,CAAC,SAAS,cAAc,CAC/E,CAAC;aACH;SACF;QAED,IAAI,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,aAAa,IAAI,IAAI,CAAC,MAAM,IAAI,IAAI,EAAE;YACzE,gEAAgE;YAChE,MAAM,IAAI,qBAAa,CAAC,qDAAqD,CAAC,CAAC;SAChF;QAED,IAAI,IAAI,CAAC,SAAS,KAAK,yBAAa,CAAC,YAAY,IAAI,IAAI,CAAC,QAAQ,IAAI,IAAI,EAAE;YAC1E,IAAI,IAAI,CAAC,QAAQ,KAAK,EAAE,EAAE;gBACxB,OAAO,CAAC,GAAG,CAAC,IAAI,EAAE,UAAU,EAAE,SAAS,CAAC,CAAC;gBACzC,OAAO;aACR;YACD,gEAAgE;YAChE,MAAM,IAAI,qBAAa,CAAC,iDAAiD,CAAC,CAAC;SAC5E;QAED,MAAM,gBAAgB,GAAG,IAAI,CAAC,mBAAmB,CAAC,sBAAsB,IAAI,KAAK,CAAC;QAClF,IAAI,CAAC,MAAM,CAAC,MAAM,CAAC,oCAA2B,CAAC,CAAC,QAAQ,CAAC,gBAAgB,CAAC,EAAE;YAC1E,MAAM,IAAI,qBAAa,CAAC,yCAAyC,gBAAgB,EAAE,CAAC,CAAC;SACtF;IACH,CAAC;IAED,MAAM,CAAC,KAAK,CACV,KAAmC,EACnC,OAAyC;QAEzC,OAAO,IAAI,gBAAgB,CAAC;YAC1B,QAAQ,EAAE,OAAO,CAAC,QAAQ,IAAI,KAAK,EAAE,QAAQ,IAAI,EAAE;YACnD,QAAQ,EAAE,OAAO,CAAC,QAAQ,IAAI,KAAK,EAAE,QAAQ,IAAI,EAAE;YACnD,SAAS,EAAE,OAAO,CAAC,SAAS,IAAI,KAAK,EAAE,SAAS,IAAI,yBAAa,CAAC,eAAe;YACjF,mBAAmB,EAAE,OAAO,CAAC,mBAAmB,IAAI,KAAK,EAAE,mBAAmB,IAAI,EAAE;YACpF,MAAM,EAAE,OAAO,CAAC,MAAM,IAAI,OAAO,CAAC,EAAE,IAAI,KAAK,EAAE,MAAM,IAAI,OAAO;SACjE,CAAC,CAAC;IACL,CAAC;CACF;AAxKD,4CAwKC"}

44
node_modules/mongodb/lib/cmap/auth/mongocr.js generated vendored Normal file

@ -0,0 +1,44 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoCR = void 0;
const crypto = require("crypto");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
class MongoCR extends auth_provider_1.AuthProvider {
auth(authContext, callback) {
const { connection, credentials } = authContext;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
const username = credentials.username;
const password = credentials.password;
const source = credentials.source;
connection.command((0, utils_1.ns)(`${source}.$cmd`), { getnonce: 1 }, undefined, (err, r) => {
let nonce = null;
let key = null;
// Get nonce
if (err == null) {
nonce = r.nonce;
// Use node md5 generator
let md5 = crypto.createHash('md5');
// Generate keys used for authentication
md5.update(`${username}:mongo:${password}`, 'utf8');
const hash_password = md5.digest('hex');
// Final key
md5 = crypto.createHash('md5');
md5.update(nonce + username + hash_password, 'utf8');
key = md5.digest('hex');
}
const authenticateCommand = {
authenticate: 1,
user: username,
nonce,
key
};
connection.command((0, utils_1.ns)(`${source}.$cmd`), authenticateCommand, undefined, callback);
});
}
}
exports.MongoCR = MongoCR;
//# sourceMappingURL=mongocr.js.map
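
The digest dance above reduces to two MD5 passes; a self-contained sketch of the same legacy derivation:

```ts
// Hedged sketch of the MONGODB-CR key derivation implemented above.
import { createHash } from 'crypto';

function mongoCRKey(nonce: string, username: string, password: string): string {
  // hash_password = md5('<username>:mongo:<password>')
  const hashPassword = createHash('md5')
    .update(`${username}:mongo:${password}`, 'utf8')
    .digest('hex');
  // key = md5(nonce + username + hash_password)
  return createHash('md5').update(nonce + username + hashPassword, 'utf8').digest('hex');
}
```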

1
node_modules/mongodb/lib/cmap/auth/mongocr.js.map generated vendored Normal file

@ -0,0 +1 @@
{"version":3,"file":"mongocr.js","sourceRoot":"","sources":["../../../src/cmap/auth/mongocr.ts"],"names":[],"mappings":";;;AAAA,iCAAiC;AAEjC,uCAA2D;AAC3D,uCAA2C;AAC3C,mDAA4D;AAE5D,MAAa,OAAQ,SAAQ,4BAAY;IAC9B,IAAI,CAAC,WAAwB,EAAE,QAAkB;QACxD,MAAM,EAAE,UAAU,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;QAChD,IAAI,CAAC,WAAW,EAAE;YAChB,OAAO,QAAQ,CAAC,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC,CAAC;SAC5F;QACD,MAAM,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QACtC,MAAM,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QACtC,MAAM,MAAM,GAAG,WAAW,CAAC,MAAM,CAAC;QAClC,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,GAAG,MAAM,OAAO,CAAC,EAAE,EAAE,QAAQ,EAAE,CAAC,EAAE,EAAE,SAAS,EAAE,CAAC,GAAG,EAAE,CAAC,EAAE,EAAE;YAC9E,IAAI,KAAK,GAAG,IAAI,CAAC;YACjB,IAAI,GAAG,GAAG,IAAI,CAAC;YAEf,YAAY;YACZ,IAAI,GAAG,IAAI,IAAI,EAAE;gBACf,KAAK,GAAG,CAAC,CAAC,KAAK,CAAC;gBAEhB,yBAAyB;gBACzB,IAAI,GAAG,GAAG,MAAM,CAAC,UAAU,CAAC,KAAK,CAAC,CAAC;gBAEnC,wCAAwC;gBACxC,GAAG,CAAC,MAAM,CAAC,GAAG,QAAQ,UAAU,QAAQ,EAAE,EAAE,MAAM,CAAC,CAAC;gBACpD,MAAM,aAAa,GAAG,GAAG,CAAC,MAAM,CAAC,KAAK,CAAC,CAAC;gBAExC,YAAY;gBACZ,GAAG,GAAG,MAAM,CAAC,UAAU,CAAC,KAAK,CAAC,CAAC;gBAC/B,GAAG,CAAC,MAAM,CAAC,KAAK,GAAG,QAAQ,GAAG,aAAa,EAAE,MAAM,CAAC,CAAC;gBACrD,GAAG,GAAG,GAAG,CAAC,MAAM,CAAC,KAAK,CAAC,CAAC;aACzB;YAED,MAAM,mBAAmB,GAAG;gBAC1B,YAAY,EAAE,CAAC;gBACf,IAAI,EAAE,QAAQ;gBACd,KAAK;gBACL,GAAG;aACJ,CAAC;YAEF,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,GAAG,MAAM,OAAO,CAAC,EAAE,mBAAmB,EAAE,SAAS,EAAE,QAAQ,CAAC,CAAC;QACrF,CAAC,CAAC,CAAC;IACL,CAAC;CACF;AAxCD,0BAwCC"}

238
node_modules/mongodb/lib/cmap/auth/mongodb_aws.js generated vendored Normal file

@ -0,0 +1,238 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoDBAWS = void 0;
const crypto = require("crypto");
const http = require("http");
const url = require("url");
const BSON = require("../../bson");
const deps_1 = require("../../deps");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
const mongo_credentials_1 = require("./mongo_credentials");
const providers_1 = require("./providers");
const ASCII_N = 110;
const AWS_RELATIVE_URI = 'http://169.254.170.2';
const AWS_EC2_URI = 'http://169.254.169.254';
const AWS_EC2_PATH = '/latest/meta-data/iam/security-credentials';
const bsonOptions = {
useBigInt64: false,
promoteLongs: true,
promoteValues: true,
promoteBuffers: false,
bsonRegExp: false
};
class MongoDBAWS extends auth_provider_1.AuthProvider {
auth(authContext, callback) {
const { connection, credentials } = authContext;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
if ('kModuleError' in deps_1.aws4) {
return callback(deps_1.aws4['kModuleError']);
}
const { sign } = deps_1.aws4;
if ((0, utils_1.maxWireVersion)(connection) < 9) {
callback(new error_1.MongoCompatibilityError('MONGODB-AWS authentication requires MongoDB version 4.4 or later'));
return;
}
if (!credentials.username) {
makeTempCredentials(credentials, (err, tempCredentials) => {
if (err || !tempCredentials)
return callback(err);
authContext.credentials = tempCredentials;
this.auth(authContext, callback);
});
return;
}
const accessKeyId = credentials.username;
const secretAccessKey = credentials.password;
const sessionToken = credentials.mechanismProperties.AWS_SESSION_TOKEN;
        // If all three are defined, include the session token; if only the key id and secret are defined, omit it; otherwise pass no credentials
const awsCredentials = accessKeyId && secretAccessKey && sessionToken
? { accessKeyId, secretAccessKey, sessionToken }
: accessKeyId && secretAccessKey
? { accessKeyId, secretAccessKey }
: undefined;
const db = credentials.source;
crypto.randomBytes(32, (err, nonce) => {
if (err) {
callback(err);
return;
}
const saslStart = {
saslStart: 1,
mechanism: 'MONGODB-AWS',
payload: BSON.serialize({ r: nonce, p: ASCII_N }, bsonOptions)
};
connection.command((0, utils_1.ns)(`${db}.$cmd`), saslStart, undefined, (err, res) => {
if (err)
return callback(err);
const serverResponse = BSON.deserialize(res.payload.buffer, bsonOptions);
const host = serverResponse.h;
const serverNonce = serverResponse.s.buffer;
if (serverNonce.length !== 64) {
callback(
// TODO(NODE-3483)
new error_1.MongoRuntimeError(`Invalid server nonce length ${serverNonce.length}, expected 64`));
return;
}
if (!utils_1.ByteUtils.equals(serverNonce.subarray(0, nonce.byteLength), nonce)) {
// throw because the serverNonce's leading 32 bytes must equal the client nonce's 32 bytes
// https://github.com/mongodb/specifications/blob/875446db44aade414011731840831f38a6c668df/source/auth/auth.rst#id11
// TODO(NODE-3483)
callback(new error_1.MongoRuntimeError('Server nonce does not begin with client nonce'));
return;
}
if (host.length < 1 || host.length > 255 || host.indexOf('..') !== -1) {
// TODO(NODE-3483)
callback(new error_1.MongoRuntimeError(`Server returned an invalid host: "${host}"`));
return;
}
const body = 'Action=GetCallerIdentity&Version=2011-06-15';
const options = sign({
method: 'POST',
host,
region: deriveRegion(serverResponse.h),
service: 'sts',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
'Content-Length': body.length,
'X-MongoDB-Server-Nonce': utils_1.ByteUtils.toBase64(serverNonce),
'X-MongoDB-GS2-CB-Flag': 'n'
},
path: '/',
body
}, awsCredentials);
const payload = {
a: options.headers.Authorization,
d: options.headers['X-Amz-Date']
};
if (sessionToken) {
payload.t = sessionToken;
}
const saslContinue = {
saslContinue: 1,
conversationId: 1,
payload: BSON.serialize(payload, bsonOptions)
};
connection.command((0, utils_1.ns)(`${db}.$cmd`), saslContinue, undefined, callback);
});
});
}
}
exports.MongoDBAWS = MongoDBAWS;
function makeTempCredentials(credentials, callback) {
function done(creds) {
if (!creds.AccessKeyId || !creds.SecretAccessKey || !creds.Token) {
callback(new error_1.MongoMissingCredentialsError('Could not obtain temporary MONGODB-AWS credentials'));
return;
}
callback(undefined, new mongo_credentials_1.MongoCredentials({
username: creds.AccessKeyId,
password: creds.SecretAccessKey,
source: credentials.source,
mechanism: providers_1.AuthMechanism.MONGODB_AWS,
mechanismProperties: {
AWS_SESSION_TOKEN: creds.Token
}
}));
}
const credentialProvider = (0, deps_1.getAwsCredentialProvider)();
// Check if the AWS credential provider from the SDK is present. If not,
// use the old method.
if ('kModuleError' in credentialProvider) {
// If the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
// is set then drivers MUST assume that it was set by an AWS ECS agent
if (process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI) {
request(`${AWS_RELATIVE_URI}${process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}`, undefined, (err, res) => {
if (err)
return callback(err);
done(res);
});
return;
}
// Otherwise assume we are on an EC2 instance
// get a token
request(`${AWS_EC2_URI}/latest/api/token`, { method: 'PUT', json: false, headers: { 'X-aws-ec2-metadata-token-ttl-seconds': 30 } }, (err, token) => {
if (err)
return callback(err);
// get role name
request(`${AWS_EC2_URI}/${AWS_EC2_PATH}`, { json: false, headers: { 'X-aws-ec2-metadata-token': token } }, (err, roleName) => {
if (err)
return callback(err);
// get temp credentials
request(`${AWS_EC2_URI}/${AWS_EC2_PATH}/${roleName}`, { headers: { 'X-aws-ec2-metadata-token': token } }, (err, creds) => {
if (err)
return callback(err);
done(creds);
});
});
});
}
else {
/*
* Creates a credential provider that will attempt to find credentials from the
* following sources (listed in order of precedence):
*
* - Environment variables exposed via process.env
* - SSO credentials from token cache
* - Web identity token credentials
* - Shared credentials and config ini files
* - The EC2/ECS Instance Metadata Service
*/
const { fromNodeProviderChain } = credentialProvider;
const provider = fromNodeProviderChain();
provider()
.then((creds) => {
done({
AccessKeyId: creds.accessKeyId,
SecretAccessKey: creds.secretAccessKey,
Token: creds.sessionToken,
Expiration: creds.expiration
});
})
.catch((error) => {
callback(new error_1.MongoAWSError(error.message));
});
}
}
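// Usage sketch (hypothetical, not part of the module's public API):
// makeTempCredentials hands back a copy of the in-flight MongoCredentials
// with the temporary AWS keys filled in:
//   makeTempCredentials(credentials, (err, tempCredentials) => {
//     if (err) return callback(err);
//     // tempCredentials.username                        -> AccessKeyId
//     // tempCredentials.password                        -> SecretAccessKey
//     // tempCredentials.mechanismProperties.AWS_SESSION_TOKEN -> Token
//   });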
function deriveRegion(host) {
const parts = host.split('.');
if (parts.length === 1 || parts[1] === 'amazonaws') {
return 'us-east-1';
}
return parts[1];
}
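// Example (editor's note): deriveRegion implements the rule that the second
// dot-separated label of the STS host is the region, with the global
// endpoint defaulting to us-east-1:
//   deriveRegion('sts.amazonaws.com')           // => 'us-east-1'
//   deriveRegion('sts.eu-west-1.amazonaws.com') // => 'eu-west-1'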
function request(uri, _options, callback) {
const options = Object.assign({
method: 'GET',
timeout: 10000,
json: true
}, url.parse(uri), _options);
const req = http.request(options, res => {
res.setEncoding('utf8');
let data = '';
res.on('data', d => (data += d));
res.on('end', () => {
if (options.json === false) {
callback(undefined, data);
return;
}
try {
const parsed = JSON.parse(data);
callback(undefined, parsed);
}
catch (err) {
// TODO(NODE-3483)
callback(new error_1.MongoRuntimeError(`Invalid JSON response: "${data}"`));
}
});
});
req.on('timeout', () => {
req.destroy(new error_1.MongoAWSError(`AWS request to ${uri} timed out after ${options.timeout} ms`));
});
req.on('error', err => callback(err));
req.end();
}
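// Usage sketch (hypothetical call): the helper defaults to a GET with a 10s
// timeout and JSON-parses the body unless `json: false` is passed:
//   request(`${AWS_EC2_URI}/latest/api/token`,
//     { method: 'PUT', json: false, headers: { 'X-aws-ec2-metadata-token-ttl-seconds': 30 } },
//     (err, token) => { /* token is the raw response text */ });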
//# sourceMappingURL=mongodb_aws.js.map

node_modules/mongodb/lib/cmap/auth/mongodb_aws.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

node_modules/mongodb/lib/cmap/auth/mongodb_oidc.js generated vendored Normal file

@@ -0,0 +1,3 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
//# sourceMappingURL=mongodb_oidc.js.map

node_modules/mongodb/lib/cmap/auth/mongodb_oidc.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"mongodb_oidc.js","sourceRoot":"","sources":["../../../src/cmap/auth/mongodb_oidc.ts"],"names":[],"mappings":""}

node_modules/mongodb/lib/cmap/auth/plain.js generated vendored Normal file

@@ -0,0 +1,27 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Plain = void 0;
const bson_1 = require("../../bson");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
class Plain extends auth_provider_1.AuthProvider {
auth(authContext, callback) {
const { connection, credentials } = authContext;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
const username = credentials.username;
const password = credentials.password;
const payload = new bson_1.Binary(Buffer.from(`\x00${username}\x00${password}`));
const command = {
saslStart: 1,
mechanism: 'PLAIN',
payload: payload,
autoAuthorize: 1
};
connection.command((0, utils_1.ns)('$external.$cmd'), command, undefined, callback);
}
}
exports.Plain = Plain;
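// Editor's note: the payload above follows RFC 4616 PLAIN -- authzid,
// authcid and password joined by NUL bytes, with the authzid left empty.
// For hypothetical credentials 'alice' / 's3cret' the serialized bytes are
// '\x00alice\x00s3cret'.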
//# sourceMappingURL=plain.js.map

node_modules/mongodb/lib/cmap/auth/plain.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"plain.js","sourceRoot":"","sources":["../../../src/cmap/auth/plain.ts"],"names":[],"mappings":";;;AAAA,qCAAoC;AACpC,uCAA2D;AAC3D,uCAA2C;AAC3C,mDAA4D;AAE5D,MAAa,KAAM,SAAQ,4BAAY;IAC5B,IAAI,CAAC,WAAwB,EAAE,QAAkB;QACxD,MAAM,EAAE,UAAU,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;QAChD,IAAI,CAAC,WAAW,EAAE;YAChB,OAAO,QAAQ,CAAC,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC,CAAC;SAC5F;QACD,MAAM,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QACtC,MAAM,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QAEtC,MAAM,OAAO,GAAG,IAAI,aAAM,CAAC,MAAM,CAAC,IAAI,CAAC,OAAO,QAAQ,OAAO,QAAQ,EAAE,CAAC,CAAC,CAAC;QAC1E,MAAM,OAAO,GAAG;YACd,SAAS,EAAE,CAAC;YACZ,SAAS,EAAE,OAAO;YAClB,OAAO,EAAE,OAAO;YAChB,aAAa,EAAE,CAAC;SACjB,CAAC;QAEF,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,gBAAgB,CAAC,EAAE,OAAO,EAAE,SAAS,EAAE,QAAQ,CAAC,CAAC;IACzE,CAAC;CACF;AAnBD,sBAmBC"}

node_modules/mongodb/lib/cmap/auth/providers.js generated vendored Normal file

@@ -0,0 +1,24 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AUTH_MECHS_AUTH_SRC_EXTERNAL = exports.AuthMechanism = void 0;
/** @public */
exports.AuthMechanism = Object.freeze({
MONGODB_AWS: 'MONGODB-AWS',
MONGODB_CR: 'MONGODB-CR',
MONGODB_DEFAULT: 'DEFAULT',
MONGODB_GSSAPI: 'GSSAPI',
MONGODB_PLAIN: 'PLAIN',
MONGODB_SCRAM_SHA1: 'SCRAM-SHA-1',
MONGODB_SCRAM_SHA256: 'SCRAM-SHA-256',
MONGODB_X509: 'MONGODB-X509',
/** @internal TODO: NODE-5035: Make mechanism public. */
MONGODB_OIDC: 'MONGODB-OIDC'
});
/** @internal */
exports.AUTH_MECHS_AUTH_SRC_EXTERNAL = new Set([
exports.AuthMechanism.MONGODB_GSSAPI,
exports.AuthMechanism.MONGODB_AWS,
exports.AuthMechanism.MONGODB_OIDC,
exports.AuthMechanism.MONGODB_X509
]);
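// Editor's note: mechanisms in this set authenticate against the $external
// database rather than a user-specified authSource, e.g. (hypothetical check)
//   AUTH_MECHS_AUTH_SRC_EXTERNAL.has(AuthMechanism.MONGODB_X509) // => true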
//# sourceMappingURL=providers.js.map

node_modules/mongodb/lib/cmap/auth/providers.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"providers.js","sourceRoot":"","sources":["../../../src/cmap/auth/providers.ts"],"names":[],"mappings":";;;AAAA,cAAc;AACD,QAAA,aAAa,GAAG,MAAM,CAAC,MAAM,CAAC;IACzC,WAAW,EAAE,aAAa;IAC1B,UAAU,EAAE,YAAY;IACxB,eAAe,EAAE,SAAS;IAC1B,cAAc,EAAE,QAAQ;IACxB,aAAa,EAAE,OAAO;IACtB,kBAAkB,EAAE,aAAa;IACjC,oBAAoB,EAAE,eAAe;IACrC,YAAY,EAAE,cAAc;IAC5B,wDAAwD;IACxD,YAAY,EAAE,cAAc;CACpB,CAAC,CAAC;AAKZ,gBAAgB;AACH,QAAA,4BAA4B,GAAG,IAAI,GAAG,CAAgB;IACjE,qBAAa,CAAC,cAAc;IAC5B,qBAAa,CAAC,WAAW;IACzB,qBAAa,CAAC,YAAY;IAC1B,qBAAa,CAAC,YAAY;CAC3B,CAAC,CAAC"}

node_modules/mongodb/lib/cmap/auth/scram.js generated vendored Normal file

@@ -0,0 +1,288 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ScramSHA256 = exports.ScramSHA1 = void 0;
const crypto = require("crypto");
const bson_1 = require("../../bson");
const deps_1 = require("../../deps");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
const providers_1 = require("./providers");
class ScramSHA extends auth_provider_1.AuthProvider {
constructor(cryptoMethod) {
super();
this.cryptoMethod = cryptoMethod || 'sha1';
}
prepare(handshakeDoc, authContext, callback) {
const cryptoMethod = this.cryptoMethod;
const credentials = authContext.credentials;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
if (cryptoMethod === 'sha256' && deps_1.saslprep == null) {
(0, utils_1.emitWarning)('Warning: no saslprep library specified. Passwords will not be sanitized');
}
crypto.randomBytes(24, (err, nonce) => {
if (err) {
return callback(err);
}
// store the nonce for later use
Object.assign(authContext, { nonce });
const request = Object.assign({}, handshakeDoc, {
speculativeAuthenticate: Object.assign(makeFirstMessage(cryptoMethod, credentials, nonce), {
db: credentials.source
})
});
callback(undefined, request);
});
}
auth(authContext, callback) {
const response = authContext.response;
if (response && response.speculativeAuthenticate) {
continueScramConversation(this.cryptoMethod, response.speculativeAuthenticate, authContext, callback);
return;
}
executeScram(this.cryptoMethod, authContext, callback);
}
}
function cleanUsername(username) {
return username.replace('=', '=3D').replace(',', '=2C');
}
function clientFirstMessageBare(username, nonce) {
// NOTE: This is done b/c Javascript uses UTF-16, but the server is hashing in UTF-8.
// Since the username is not sasl-prep-d, we need to do this here.
return Buffer.concat([
Buffer.from('n=', 'utf8'),
Buffer.from(username, 'utf8'),
Buffer.from(',r=', 'utf8'),
Buffer.from(nonce.toString('base64'), 'utf8')
]);
}
function makeFirstMessage(cryptoMethod, credentials, nonce) {
const username = cleanUsername(credentials.username);
const mechanism = cryptoMethod === 'sha1' ? providers_1.AuthMechanism.MONGODB_SCRAM_SHA1 : providers_1.AuthMechanism.MONGODB_SCRAM_SHA256;
// NOTE: This is done b/c Javascript uses UTF-16, but the server is hashing in UTF-8.
// Since the username is not sasl-prep-d, we need to do this here.
return {
saslStart: 1,
mechanism,
payload: new bson_1.Binary(Buffer.concat([Buffer.from('n,,', 'utf8'), clientFirstMessageBare(username, nonce)])),
autoAuthorize: 1,
options: { skipEmptyExchange: true }
};
}
function executeScram(cryptoMethod, authContext, callback) {
const { connection, credentials } = authContext;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
if (!authContext.nonce) {
return callback(new error_1.MongoInvalidArgumentError('AuthContext must contain a valid nonce property'));
}
const nonce = authContext.nonce;
const db = credentials.source;
const saslStartCmd = makeFirstMessage(cryptoMethod, credentials, nonce);
connection.command((0, utils_1.ns)(`${db}.$cmd`), saslStartCmd, undefined, (_err, result) => {
const err = resolveError(_err, result);
if (err) {
return callback(err);
}
continueScramConversation(cryptoMethod, result, authContext, callback);
});
}
function continueScramConversation(cryptoMethod, response, authContext, callback) {
const connection = authContext.connection;
const credentials = authContext.credentials;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
if (!authContext.nonce) {
return callback(new error_1.MongoInvalidArgumentError('Unable to continue SCRAM without valid nonce'));
}
const nonce = authContext.nonce;
const db = credentials.source;
const username = cleanUsername(credentials.username);
const password = credentials.password;
let processedPassword;
if (cryptoMethod === 'sha256') {
processedPassword = 'kModuleError' in deps_1.saslprep ? password : (0, deps_1.saslprep)(password);
}
else {
try {
processedPassword = passwordDigest(username, password);
}
catch (e) {
return callback(e);
}
}
const payload = Buffer.isBuffer(response.payload)
? new bson_1.Binary(response.payload)
: response.payload;
const dict = parsePayload(payload.value());
const iterations = parseInt(dict.i, 10);
if (iterations && iterations < 4096) {
callback(
// TODO(NODE-3483)
new error_1.MongoRuntimeError(`Server returned an invalid iteration count ${iterations}`), false);
return;
}
const salt = dict.s;
const rnonce = dict.r;
if (rnonce.startsWith('nonce')) {
// TODO(NODE-3483)
callback(new error_1.MongoRuntimeError(`Server returned an invalid nonce: ${rnonce}`), false);
return;
}
// Set up start of proof
const withoutProof = `c=biws,r=${rnonce}`;
const saltedPassword = HI(processedPassword, Buffer.from(salt, 'base64'), iterations, cryptoMethod);
const clientKey = HMAC(cryptoMethod, saltedPassword, 'Client Key');
const serverKey = HMAC(cryptoMethod, saltedPassword, 'Server Key');
const storedKey = H(cryptoMethod, clientKey);
const authMessage = [clientFirstMessageBare(username, nonce), payload.value(), withoutProof].join(',');
const clientSignature = HMAC(cryptoMethod, storedKey, authMessage);
const clientProof = `p=${xor(clientKey, clientSignature)}`;
const clientFinal = [withoutProof, clientProof].join(',');
const serverSignature = HMAC(cryptoMethod, serverKey, authMessage);
const saslContinueCmd = {
saslContinue: 1,
conversationId: response.conversationId,
payload: new bson_1.Binary(Buffer.from(clientFinal))
};
connection.command((0, utils_1.ns)(`${db}.$cmd`), saslContinueCmd, undefined, (_err, r) => {
const err = resolveError(_err, r);
if (err) {
return callback(err);
}
const parsedResponse = parsePayload(r.payload.value());
if (!compareDigest(Buffer.from(parsedResponse.v, 'base64'), serverSignature)) {
callback(new error_1.MongoRuntimeError('Server returned an invalid signature'));
return;
}
if (!r || r.done !== false) {
return callback(err, r);
}
const retrySaslContinueCmd = {
saslContinue: 1,
conversationId: r.conversationId,
payload: Buffer.alloc(0)
};
connection.command((0, utils_1.ns)(`${db}.$cmd`), retrySaslContinueCmd, undefined, callback);
});
}
function parsePayload(payload) {
const dict = {};
const parts = payload.split(',');
for (let i = 0; i < parts.length; i++) {
const valueParts = parts[i].split('=');
dict[valueParts[0]] = valueParts[1];
}
return dict;
}
function passwordDigest(username, password) {
if (typeof username !== 'string') {
throw new error_1.MongoInvalidArgumentError('Username must be a string');
}
if (typeof password !== 'string') {
throw new error_1.MongoInvalidArgumentError('Password must be a string');
}
if (password.length === 0) {
throw new error_1.MongoInvalidArgumentError('Password cannot be empty');
}
let md5;
try {
md5 = crypto.createHash('md5');
}
catch (err) {
if (crypto.getFips()) {
// This error is (slightly) more helpful than what comes from OpenSSL directly, e.g.
// 'Error: error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS'
throw new Error('Auth mechanism SCRAM-SHA-1 is not supported in FIPS mode');
}
throw err;
}
md5.update(`${username}:mongo:${password}`, 'utf8');
return md5.digest('hex');
}
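// Example (hypothetical credentials): SCRAM-SHA-1 derives its password from
// MD5 of '<user>:mongo:<password>', so passwordDigest('user', 'pencil')
// returns the hex MD5 digest of 'user:mongo:pencil'.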
// XOR two buffers
function xor(a, b) {
if (!Buffer.isBuffer(a)) {
a = Buffer.from(a);
}
if (!Buffer.isBuffer(b)) {
b = Buffer.from(b);
}
const length = Math.max(a.length, b.length);
const res = [];
for (let i = 0; i < length; i += 1) {
res.push(a[i] ^ b[i]);
}
return Buffer.from(res).toString('base64');
}
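// Example (editor's note): the XOR is byte-wise over the longer length, and
// out-of-range reads coerce to 0, so the shorter buffer is zero-padded:
//   xor(Buffer.from([0xff]), Buffer.from([0x0f])) // => '8A==' (base64 of 0xf0)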
function H(method, text) {
return crypto.createHash(method).update(text).digest();
}
function HMAC(method, key, text) {
return crypto.createHmac(method, key).update(text).digest();
}
let _hiCache = {};
let _hiCacheCount = 0;
function _hiCachePurge() {
_hiCache = {};
_hiCacheCount = 0;
}
const hiLengthMap = {
sha256: 32,
sha1: 20
};
function HI(data, salt, iterations, cryptoMethod) {
// omit the work if already generated
const key = [data, salt.toString('base64'), iterations].join('_');
if (_hiCache[key] != null) {
return _hiCache[key];
}
// generate the salt
const saltedData = crypto.pbkdf2Sync(data, salt, iterations, hiLengthMap[cryptoMethod], cryptoMethod);
// cache a copy to speed up the next lookup, but prevent unbounded cache growth
if (_hiCacheCount >= 200) {
_hiCachePurge();
}
_hiCache[key] = saltedData;
_hiCacheCount += 1;
return saltedData;
}
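// Example (values from the well-known RFC 5802 test vector, shown for
// illustration):
//   HI('pencil', Buffer.from('QSXCR+Q6sek8bf92', 'base64'), 4096, 'sha1')
// returns the 20-byte salted password and caches it under the key
// 'pencil_QSXCR+Q6sek8bf92_4096'.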
function compareDigest(lhs, rhs) {
if (lhs.length !== rhs.length) {
return false;
}
if (typeof crypto.timingSafeEqual === 'function') {
return crypto.timingSafeEqual(lhs, rhs);
}
let result = 0;
for (let i = 0; i < lhs.length; i++) {
result |= lhs[i] ^ rhs[i];
}
return result === 0;
}
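// Editor's note: the fallback loop deliberately inspects every byte and only
// checks the accumulated result at the end, so comparison time does not leak
// how many leading bytes of the server signature matched.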
function resolveError(err, result) {
if (err)
return err;
if (result) {
if (result.$err || result.errmsg)
return new error_1.MongoServerError(result);
}
return;
}
class ScramSHA1 extends ScramSHA {
constructor() {
super('sha1');
}
}
exports.ScramSHA1 = ScramSHA1;
class ScramSHA256 extends ScramSHA {
constructor() {
super('sha256');
}
}
exports.ScramSHA256 = ScramSHA256;
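// Editor's sketch of the full exchange above in RFC 5802 terms (hypothetical
// values, base64 wrapping omitted):
//   C: n,,n=user,r=<clientNonce>                    (saslStart)
//   S: r=<clientNonce><serverNonce>,s=<salt>,i=4096
//   C: c=biws,r=<combinedNonce>,p=<clientProof>     (saslContinue)
//   S: v=<serverSignature>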
//# sourceMappingURL=scram.js.map

node_modules/mongodb/lib/cmap/auth/scram.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

node_modules/mongodb/lib/cmap/auth/x509.js generated vendored Normal file

@@ -0,0 +1,39 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.X509 = void 0;
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
class X509 extends auth_provider_1.AuthProvider {
prepare(handshakeDoc, authContext, callback) {
const { credentials } = authContext;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
Object.assign(handshakeDoc, {
speculativeAuthenticate: x509AuthenticateCommand(credentials)
});
callback(undefined, handshakeDoc);
}
auth(authContext, callback) {
const connection = authContext.connection;
const credentials = authContext.credentials;
if (!credentials) {
return callback(new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.'));
}
const response = authContext.response;
if (response && response.speculativeAuthenticate) {
return callback();
}
connection.command((0, utils_1.ns)('$external.$cmd'), x509AuthenticateCommand(credentials), undefined, callback);
}
}
exports.X509 = X509;
function x509AuthenticateCommand(credentials) {
const command = { authenticate: 1, mechanism: 'MONGODB-X509' };
if (credentials.username) {
command.user = credentials.username;
}
return command;
}
//# sourceMappingURL=x509.js.map

node_modules/mongodb/lib/cmap/auth/x509.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"x509.js","sourceRoot":"","sources":["../../../src/cmap/auth/x509.ts"],"names":[],"mappings":";;;AACA,uCAA2D;AAC3D,uCAA2C;AAE3C,mDAA4D;AAG5D,MAAa,IAAK,SAAQ,4BAAY;IAC3B,OAAO,CACd,YAA+B,EAC/B,WAAwB,EACxB,QAAkB;QAElB,MAAM,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;QACpC,IAAI,CAAC,WAAW,EAAE;YAChB,OAAO,QAAQ,CAAC,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC,CAAC;SAC5F;QACD,MAAM,CAAC,MAAM,CAAC,YAAY,EAAE;YAC1B,uBAAuB,EAAE,uBAAuB,CAAC,WAAW,CAAC;SAC9D,CAAC,CAAC;QAEH,QAAQ,CAAC,SAAS,EAAE,YAAY,CAAC,CAAC;IACpC,CAAC;IAEQ,IAAI,CAAC,WAAwB,EAAE,QAAkB;QACxD,MAAM,UAAU,GAAG,WAAW,CAAC,UAAU,CAAC;QAC1C,MAAM,WAAW,GAAG,WAAW,CAAC,WAAW,CAAC;QAC5C,IAAI,CAAC,WAAW,EAAE;YAChB,OAAO,QAAQ,CAAC,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC,CAAC;SAC5F;QACD,MAAM,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QAEtC,IAAI,QAAQ,IAAI,QAAQ,CAAC,uBAAuB,EAAE;YAChD,OAAO,QAAQ,EAAE,CAAC;SACnB;QAED,UAAU,CAAC,OAAO,CAChB,IAAA,UAAE,EAAC,gBAAgB,CAAC,EACpB,uBAAuB,CAAC,WAAW,CAAC,EACpC,SAAS,EACT,QAAQ,CACT,CAAC;IACJ,CAAC;CACF;AApCD,oBAoCC;AAED,SAAS,uBAAuB,CAAC,WAA6B;IAC5D,MAAM,OAAO,GAAa,EAAE,YAAY,EAAE,CAAC,EAAE,SAAS,EAAE,cAAc,EAAE,CAAC;IACzE,IAAI,WAAW,CAAC,QAAQ,EAAE;QACxB,OAAO,CAAC,IAAI,GAAG,WAAW,CAAC,QAAQ,CAAC;KACrC;IAED,OAAO,OAAO,CAAC;AACjB,CAAC"}

node_modules/mongodb/lib/cmap/command_monitoring_events.js generated vendored Normal file

@@ -0,0 +1,242 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.CommandFailedEvent = exports.CommandSucceededEvent = exports.CommandStartedEvent = void 0;
const constants_1 = require("../constants");
const utils_1 = require("../utils");
const commands_1 = require("./commands");
/**
* An event indicating the start of a given command
* @public
* @category Event
*/
class CommandStartedEvent {
/**
* Create a started event
*
* @internal
* @param connection - the connection that originated the command
* @param command - the command
*/
constructor(connection, command) {
const cmd = extractCommand(command);
const commandName = extractCommandName(cmd);
const { address, connectionId, serviceId } = extractConnectionDetails(connection);
// TODO: remove in major revision, this is not spec behavior
if (SENSITIVE_COMMANDS.has(commandName)) {
this.commandObj = {};
this.commandObj[commandName] = true;
}
this.address = address;
this.connectionId = connectionId;
this.serviceId = serviceId;
this.requestId = command.requestId;
this.databaseName = databaseName(command);
this.commandName = commandName;
this.command = maybeRedact(commandName, cmd, cmd);
}
/* @internal */
get hasServiceId() {
return !!this.serviceId;
}
}
exports.CommandStartedEvent = CommandStartedEvent;
/**
* An event indicating the success of a given command
* @public
* @category Event
*/
class CommandSucceededEvent {
/**
* Create a succeeded event
*
* @internal
* @param connection - the connection that originated the command
* @param command - the command
* @param reply - the reply for this command from the server
* @param started - a high resolution tuple timestamp of when the command was first sent, to calculate duration
*/
constructor(connection, command, reply, started) {
const cmd = extractCommand(command);
const commandName = extractCommandName(cmd);
const { address, connectionId, serviceId } = extractConnectionDetails(connection);
this.address = address;
this.connectionId = connectionId;
this.serviceId = serviceId;
this.requestId = command.requestId;
this.commandName = commandName;
this.duration = (0, utils_1.calculateDurationInMs)(started);
this.reply = maybeRedact(commandName, cmd, extractReply(command, reply));
}
/* @internal */
get hasServiceId() {
return !!this.serviceId;
}
}
exports.CommandSucceededEvent = CommandSucceededEvent;
/**
* An event indicating the failure of a given command
* @public
* @category Event
*/
class CommandFailedEvent {
/**
* Create a failure event
*
* @internal
* @param connection - the connection that originated the command
* @param command - the command
* @param error - the generated error or a server error response
* @param started - a high resolution tuple timestamp of when the command was first sent, to calculate duration
*/
constructor(connection, command, error, started) {
const cmd = extractCommand(command);
const commandName = extractCommandName(cmd);
const { address, connectionId, serviceId } = extractConnectionDetails(connection);
this.address = address;
this.connectionId = connectionId;
this.serviceId = serviceId;
this.requestId = command.requestId;
this.commandName = commandName;
this.duration = (0, utils_1.calculateDurationInMs)(started);
this.failure = maybeRedact(commandName, cmd, error);
}
/* @internal */
get hasServiceId() {
return !!this.serviceId;
}
}
exports.CommandFailedEvent = CommandFailedEvent;
/** Commands that we want to redact because of the sensitive nature of their contents */
const SENSITIVE_COMMANDS = new Set([
'authenticate',
'saslStart',
'saslContinue',
'getnonce',
'createUser',
'updateUser',
'copydbgetnonce',
'copydbsaslstart',
'copydb'
]);
const HELLO_COMMANDS = new Set(['hello', constants_1.LEGACY_HELLO_COMMAND, constants_1.LEGACY_HELLO_COMMAND_CAMEL_CASE]);
// helper methods
const extractCommandName = (commandDoc) => Object.keys(commandDoc)[0];
const namespace = (command) => command.ns;
const databaseName = (command) => command.ns.split('.')[0];
const collectionName = (command) => command.ns.split('.')[1];
const maybeRedact = (commandName, commandDoc, result) => SENSITIVE_COMMANDS.has(commandName) ||
(HELLO_COMMANDS.has(commandName) && commandDoc.speculativeAuthenticate)
? {}
: result;
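// Example (editor's note): maybeRedact('saslContinue', cmd, result) yields {}
// because saslContinue is in SENSITIVE_COMMANDS; a 'hello' command is only
// redacted when it carries speculativeAuthenticate.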
const LEGACY_FIND_QUERY_MAP = {
$query: 'filter',
$orderby: 'sort',
$hint: 'hint',
$comment: 'comment',
$maxScan: 'maxScan',
$max: 'max',
$min: 'min',
$returnKey: 'returnKey',
$showDiskLoc: 'showRecordId',
$maxTimeMS: 'maxTimeMS',
$snapshot: 'snapshot'
};
const LEGACY_FIND_OPTIONS_MAP = {
numberToSkip: 'skip',
numberToReturn: 'batchSize',
returnFieldSelector: 'projection'
};
const OP_QUERY_KEYS = [
'tailable',
'oplogReplay',
'noCursorTimeout',
'awaitData',
'partial',
'exhaust'
];
/** Extract the actual command from the query, possibly up-converting if it's a legacy format */
function extractCommand(command) {
if (command instanceof commands_1.Msg) {
return (0, utils_1.deepCopy)(command.command);
}
if (command.query?.$query) {
let result;
if (command.ns === 'admin.$cmd') {
// up-convert legacy command
result = Object.assign({}, command.query.$query);
}
else {
// up-convert legacy find command
result = { find: collectionName(command) };
Object.keys(LEGACY_FIND_QUERY_MAP).forEach(key => {
if (command.query[key] != null) {
result[LEGACY_FIND_QUERY_MAP[key]] = (0, utils_1.deepCopy)(command.query[key]);
}
});
}
Object.keys(LEGACY_FIND_OPTIONS_MAP).forEach(key => {
const legacyKey = key;
if (command[legacyKey] != null) {
result[LEGACY_FIND_OPTIONS_MAP[legacyKey]] = (0, utils_1.deepCopy)(command[legacyKey]);
}
});
OP_QUERY_KEYS.forEach(key => {
if (command[key]) {
result[key] = command[key];
}
});
if (command.pre32Limit != null) {
result.limit = command.pre32Limit;
}
if (command.query.$explain) {
return { explain: result };
}
return result;
}
const clonedQuery = {};
const clonedCommand = {};
if (command.query) {
for (const k in command.query) {
clonedQuery[k] = (0, utils_1.deepCopy)(command.query[k]);
}
clonedCommand.query = clonedQuery;
}
for (const k in command) {
if (k === 'query')
continue;
clonedCommand[k] = (0, utils_1.deepCopy)(command[k]);
}
return command.query ? clonedQuery : clonedCommand;
}
function extractReply(command, reply) {
if (!reply) {
return reply;
}
if (command instanceof commands_1.Msg) {
return (0, utils_1.deepCopy)(reply.result ? reply.result : reply);
}
// is this a legacy find command?
if (command.query && command.query.$query != null) {
return {
ok: 1,
cursor: {
id: (0, utils_1.deepCopy)(reply.cursorId),
ns: namespace(command),
firstBatch: (0, utils_1.deepCopy)(reply.documents)
}
};
}
return (0, utils_1.deepCopy)(reply.result ? reply.result : reply);
}
function extractConnectionDetails(connection) {
let connectionId;
if ('id' in connection) {
connectionId = connection.id;
}
return {
address: connection.address,
serviceId: connection.serviceId,
connectionId
};
}
//# sourceMappingURL=command_monitoring_events.js.map

node_modules/mongodb/lib/cmap/command_monitoring_events.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

node_modules/mongodb/lib/cmap/commands.js generated vendored Normal file

@@ -0,0 +1,487 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.BinMsg = exports.Msg = exports.Response = exports.Query = void 0;
const BSON = require("../bson");
const error_1 = require("../error");
const read_preference_1 = require("../read_preference");
const utils_1 = require("../utils");
const constants_1 = require("./wire_protocol/constants");
// Incrementing request id
let _requestId = 0;
// Query flags
const OPTS_TAILABLE_CURSOR = 2;
const OPTS_SECONDARY = 4;
const OPTS_OPLOG_REPLAY = 8;
const OPTS_NO_CURSOR_TIMEOUT = 16;
const OPTS_AWAIT_DATA = 32;
const OPTS_EXHAUST = 64;
const OPTS_PARTIAL = 128;
// Response flags
const CURSOR_NOT_FOUND = 1;
const QUERY_FAILURE = 2;
const SHARD_CONFIG_STALE = 4;
const AWAIT_CAPABLE = 8;
/**************************************************************
* QUERY
**************************************************************/
/** @internal */
class Query {
constructor(ns, query, options) {
// Basic options needed to be passed in
// TODO(NODE-3483): Replace with MongoCommandError
if (ns == null)
throw new error_1.MongoRuntimeError('Namespace must be specified for query');
// TODO(NODE-3483): Replace with MongoCommandError
if (query == null)
throw new error_1.MongoRuntimeError('A query document must be specified for query');
// Validate that we are not passing 0x00 in the collection name
if (ns.indexOf('\x00') !== -1) {
// TODO(NODE-3483): Use MongoNamespace static method
throw new error_1.MongoRuntimeError('Namespace cannot contain a null character');
}
// Basic options
this.ns = ns;
this.query = query;
// Additional options
this.numberToSkip = options.numberToSkip || 0;
this.numberToReturn = options.numberToReturn || 0;
this.returnFieldSelector = options.returnFieldSelector || undefined;
this.requestId = Query.getRequestId();
// special case for pre-3.2 find commands, delete ASAP
this.pre32Limit = options.pre32Limit;
// Serialization option
this.serializeFunctions =
typeof options.serializeFunctions === 'boolean' ? options.serializeFunctions : false;
this.ignoreUndefined =
typeof options.ignoreUndefined === 'boolean' ? options.ignoreUndefined : false;
this.maxBsonSize = options.maxBsonSize || 1024 * 1024 * 16;
this.checkKeys = typeof options.checkKeys === 'boolean' ? options.checkKeys : false;
this.batchSize = this.numberToReturn;
// Flags
this.tailable = false;
this.secondaryOk = typeof options.secondaryOk === 'boolean' ? options.secondaryOk : false;
this.oplogReplay = false;
this.noCursorTimeout = false;
this.awaitData = false;
this.exhaust = false;
this.partial = false;
}
/** Assign next request Id. */
incRequestId() {
this.requestId = _requestId++;
}
/** Peek next request Id. */
nextRequestId() {
return _requestId + 1;
}
/** Increment then return next request Id. */
static getRequestId() {
return ++_requestId;
}
// Uses a single allocated buffer for the process, avoiding multiple memory allocations
toBin() {
const buffers = [];
let projection = null;
// Set up the flags
let flags = 0;
if (this.tailable) {
flags |= OPTS_TAILABLE_CURSOR;
}
if (this.secondaryOk) {
flags |= OPTS_SECONDARY;
}
if (this.oplogReplay) {
flags |= OPTS_OPLOG_REPLAY;
}
if (this.noCursorTimeout) {
flags |= OPTS_NO_CURSOR_TIMEOUT;
}
if (this.awaitData) {
flags |= OPTS_AWAIT_DATA;
}
if (this.exhaust) {
flags |= OPTS_EXHAUST;
}
if (this.partial) {
flags |= OPTS_PARTIAL;
}
// If batchSize is different to this.numberToReturn
if (this.batchSize !== this.numberToReturn)
this.numberToReturn = this.batchSize;
// Allocate write protocol header buffer
const header = Buffer.alloc(4 * 4 + // Header
4 + // Flags
Buffer.byteLength(this.ns) +
1 + // namespace
4 + // numberToSkip
4 // numberToReturn
);
// Add header to buffers
buffers.push(header);
// Serialize the query
const query = BSON.serialize(this.query, {
checkKeys: this.checkKeys,
serializeFunctions: this.serializeFunctions,
ignoreUndefined: this.ignoreUndefined
});
// Add query document
buffers.push(query);
if (this.returnFieldSelector && Object.keys(this.returnFieldSelector).length > 0) {
// Serialize the projection document
projection = BSON.serialize(this.returnFieldSelector, {
checkKeys: this.checkKeys,
serializeFunctions: this.serializeFunctions,
ignoreUndefined: this.ignoreUndefined
});
// Add projection document
buffers.push(projection);
}
// Total message size
const totalLength = header.length + query.length + (projection ? projection.length : 0);
// Set up the index
let index = 4;
// Write total document length
header[3] = (totalLength >> 24) & 0xff;
header[2] = (totalLength >> 16) & 0xff;
header[1] = (totalLength >> 8) & 0xff;
header[0] = totalLength & 0xff;
// Write header information requestId
header[index + 3] = (this.requestId >> 24) & 0xff;
header[index + 2] = (this.requestId >> 16) & 0xff;
header[index + 1] = (this.requestId >> 8) & 0xff;
header[index] = this.requestId & 0xff;
index = index + 4;
// Write header information responseTo
header[index + 3] = (0 >> 24) & 0xff;
header[index + 2] = (0 >> 16) & 0xff;
header[index + 1] = (0 >> 8) & 0xff;
header[index] = 0 & 0xff;
index = index + 4;
// Write header information OP_QUERY
header[index + 3] = (constants_1.OP_QUERY >> 24) & 0xff;
header[index + 2] = (constants_1.OP_QUERY >> 16) & 0xff;
header[index + 1] = (constants_1.OP_QUERY >> 8) & 0xff;
header[index] = constants_1.OP_QUERY & 0xff;
index = index + 4;
// Write header information flags
header[index + 3] = (flags >> 24) & 0xff;
header[index + 2] = (flags >> 16) & 0xff;
header[index + 1] = (flags >> 8) & 0xff;
header[index] = flags & 0xff;
index = index + 4;
// Write collection name
index = index + header.write(this.ns, index, 'utf8') + 1;
header[index - 1] = 0;
// Write header information flags numberToSkip
header[index + 3] = (this.numberToSkip >> 24) & 0xff;
header[index + 2] = (this.numberToSkip >> 16) & 0xff;
header[index + 1] = (this.numberToSkip >> 8) & 0xff;
header[index] = this.numberToSkip & 0xff;
index = index + 4;
// Write header information flags numberToReturn
header[index + 3] = (this.numberToReturn >> 24) & 0xff;
header[index + 2] = (this.numberToReturn >> 16) & 0xff;
header[index + 1] = (this.numberToReturn >> 8) & 0xff;
header[index] = this.numberToReturn & 0xff;
index = index + 4;
// Return the buffers
return buffers;
}
}
exports.Query = Query;
/** @internal */
class Response {
constructor(message, msgHeader, msgBody, opts) {
this.documents = new Array(0);
this.parsed = false;
this.raw = message;
this.data = msgBody;
this.opts = opts ?? {
useBigInt64: false,
promoteLongs: true,
promoteValues: true,
promoteBuffers: false,
bsonRegExp: false
};
// Read the message header
this.length = msgHeader.length;
this.requestId = msgHeader.requestId;
this.responseTo = msgHeader.responseTo;
this.opCode = msgHeader.opCode;
this.fromCompressed = msgHeader.fromCompressed;
// Flag values
this.useBigInt64 = typeof this.opts.useBigInt64 === 'boolean' ? this.opts.useBigInt64 : false;
this.promoteLongs = typeof this.opts.promoteLongs === 'boolean' ? this.opts.promoteLongs : true;
this.promoteValues =
typeof this.opts.promoteValues === 'boolean' ? this.opts.promoteValues : true;
this.promoteBuffers =
typeof this.opts.promoteBuffers === 'boolean' ? this.opts.promoteBuffers : false;
this.bsonRegExp = typeof this.opts.bsonRegExp === 'boolean' ? this.opts.bsonRegExp : false;
}
isParsed() {
return this.parsed;
}
parse(options) {
// Don't parse again if not needed
if (this.parsed)
return;
options = options ?? {};
// Allow the return of raw documents instead of parsing
const raw = options.raw || false;
const documentsReturnedIn = options.documentsReturnedIn || null;
const useBigInt64 = options.useBigInt64 ?? this.opts.useBigInt64;
const promoteLongs = options.promoteLongs ?? this.opts.promoteLongs;
const promoteValues = options.promoteValues ?? this.opts.promoteValues;
const promoteBuffers = options.promoteBuffers ?? this.opts.promoteBuffers;
const bsonRegExp = options.bsonRegExp ?? this.opts.bsonRegExp;
let bsonSize;
// Set up the options
const _options = {
useBigInt64,
promoteLongs,
promoteValues,
promoteBuffers,
bsonRegExp
};
// Position within OP_REPLY at which documents start
// (See https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/#wire-op-reply)
this.index = 20;
// Read the message body
this.responseFlags = this.data.readInt32LE(0);
this.cursorId = new BSON.Long(this.data.readInt32LE(4), this.data.readInt32LE(8));
this.startingFrom = this.data.readInt32LE(12);
this.numberReturned = this.data.readInt32LE(16);
// Preallocate document array
this.documents = new Array(this.numberReturned);
this.cursorNotFound = (this.responseFlags & CURSOR_NOT_FOUND) !== 0;
this.queryFailure = (this.responseFlags & QUERY_FAILURE) !== 0;
this.shardConfigStale = (this.responseFlags & SHARD_CONFIG_STALE) !== 0;
this.awaitCapable = (this.responseFlags & AWAIT_CAPABLE) !== 0;
// Parse Body
for (let i = 0; i < this.numberReturned; i++) {
bsonSize =
this.data[this.index] |
(this.data[this.index + 1] << 8) |
(this.data[this.index + 2] << 16) |
(this.data[this.index + 3] << 24);
// If we have raw results specified slice the return document
if (raw) {
this.documents[i] = this.data.slice(this.index, this.index + bsonSize);
}
else {
this.documents[i] = BSON.deserialize(this.data.slice(this.index, this.index + bsonSize), _options);
}
// Adjust the index
this.index = this.index + bsonSize;
}
if (this.documents.length === 1 && documentsReturnedIn != null && raw) {
const fieldsAsRaw = {};
fieldsAsRaw[documentsReturnedIn] = true;
_options.fieldsAsRaw = fieldsAsRaw;
const doc = BSON.deserialize(this.documents[0], _options);
this.documents = [doc];
}
// Set parsed
this.parsed = true;
}
}
exports.Response = Response;
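// Editor's note: the OP_REPLY body parsed above is laid out as
//   offset 0   int32 responseFlags
//   offset 4   int64 cursorID
//   offset 12  int32 startingFrom
//   offset 16  int32 numberReturned
//   offset 20  BSON documents, back to back
// which is why `this.index` starts at 20.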
// Implementation of OP_MSG spec:
// https://github.com/mongodb/specifications/blob/master/source/message/OP_MSG.rst
//
// struct Section {
// uint8 payloadType;
// union payload {
// document document; // payloadType == 0
// struct sequence { // payloadType == 1
// int32 size;
// cstring identifier;
// document* documents;
// };
// };
// };
// struct OP_MSG {
// struct MsgHeader {
// int32 messageLength;
// int32 requestID;
// int32 responseTo;
// int32 opCode = 2013;
// };
// uint32 flagBits;
// Section+ sections;
// [uint32 checksum;]
// };
// Msg Flags
const OPTS_CHECKSUM_PRESENT = 1;
const OPTS_MORE_TO_COME = 2;
const OPTS_EXHAUST_ALLOWED = 1 << 16;
/** @internal */
class Msg {
constructor(ns, command, options) {
// Basic options needed to be passed in
if (command == null)
throw new error_1.MongoInvalidArgumentError('Query document must be specified for query');
// Basic options
this.ns = ns;
this.command = command;
this.command.$db = (0, utils_1.databaseNamespace)(ns);
if (options.readPreference && options.readPreference.mode !== read_preference_1.ReadPreference.PRIMARY) {
this.command.$readPreference = options.readPreference.toJSON();
}
// Ensure empty options
this.options = options ?? {};
// Additional options
this.requestId = options.requestId ? options.requestId : Msg.getRequestId();
// Serialization option
this.serializeFunctions =
typeof options.serializeFunctions === 'boolean' ? options.serializeFunctions : false;
this.ignoreUndefined =
typeof options.ignoreUndefined === 'boolean' ? options.ignoreUndefined : false;
this.checkKeys = typeof options.checkKeys === 'boolean' ? options.checkKeys : false;
this.maxBsonSize = options.maxBsonSize || 1024 * 1024 * 16;
// flags
this.checksumPresent = false;
this.moreToCome = options.moreToCome || false;
this.exhaustAllowed =
typeof options.exhaustAllowed === 'boolean' ? options.exhaustAllowed : false;
}
toBin() {
const buffers = [];
let flags = 0;
if (this.checksumPresent) {
flags |= OPTS_CHECKSUM_PRESENT;
}
if (this.moreToCome) {
flags |= OPTS_MORE_TO_COME;
}
if (this.exhaustAllowed) {
flags |= OPTS_EXHAUST_ALLOWED;
}
const header = Buffer.alloc(4 * 4 + // Header
4 // Flags
);
buffers.push(header);
let totalLength = header.length;
const command = this.command;
totalLength += this.makeDocumentSegment(buffers, command);
header.writeInt32LE(totalLength, 0); // messageLength
header.writeInt32LE(this.requestId, 4); // requestID
header.writeInt32LE(0, 8); // responseTo
header.writeInt32LE(constants_1.OP_MSG, 12); // opCode
header.writeUInt32LE(flags, 16); // flags
return buffers;
}
makeDocumentSegment(buffers, document) {
const payloadTypeBuffer = Buffer.alloc(1);
payloadTypeBuffer[0] = 0;
const documentBuffer = this.serializeBson(document);
buffers.push(payloadTypeBuffer);
buffers.push(documentBuffer);
return payloadTypeBuffer.length + documentBuffer.length;
}
serializeBson(document) {
return BSON.serialize(document, {
checkKeys: this.checkKeys,
serializeFunctions: this.serializeFunctions,
ignoreUndefined: this.ignoreUndefined
});
}
static getRequestId() {
_requestId = (_requestId + 1) & 0x7fffffff;
return _requestId;
}
}
exports.Msg = Msg;
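// Editor's sketch: a Msg for { hello: 1 } against 'admin.$cmd' serializes
// (hypothetically) as three buffers --
//   [16-byte MsgHeader + 4-byte flagBits][0x00 payload type][BSON({ hello: 1, $db: 'admin' })]
// with messageLength in the header covering all of them.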
/** @internal */
class BinMsg {
constructor(message, msgHeader, msgBody, opts) {
this.parsed = false;
this.raw = message;
this.data = msgBody;
this.opts = opts ?? {
useBigInt64: false,
promoteLongs: true,
promoteValues: true,
promoteBuffers: false,
bsonRegExp: false
};
// Read the message header
this.length = msgHeader.length;
this.requestId = msgHeader.requestId;
this.responseTo = msgHeader.responseTo;
this.opCode = msgHeader.opCode;
this.fromCompressed = msgHeader.fromCompressed;
// Read response flags
this.responseFlags = msgBody.readInt32LE(0);
this.checksumPresent = (this.responseFlags & OPTS_CHECKSUM_PRESENT) !== 0;
this.moreToCome = (this.responseFlags & OPTS_MORE_TO_COME) !== 0;
this.exhaustAllowed = (this.responseFlags & OPTS_EXHAUST_ALLOWED) !== 0;
this.useBigInt64 = typeof this.opts.useBigInt64 === 'boolean' ? this.opts.useBigInt64 : false;
this.promoteLongs = typeof this.opts.promoteLongs === 'boolean' ? this.opts.promoteLongs : true;
this.promoteValues =
typeof this.opts.promoteValues === 'boolean' ? this.opts.promoteValues : true;
this.promoteBuffers =
typeof this.opts.promoteBuffers === 'boolean' ? this.opts.promoteBuffers : false;
this.bsonRegExp = typeof this.opts.bsonRegExp === 'boolean' ? this.opts.bsonRegExp : false;
this.documents = [];
}
isParsed() {
return this.parsed;
}
parse(options) {
// Don't parse again if not needed
if (this.parsed)
return;
options = options ?? {};
this.index = 4;
// Allow the return of raw documents instead of parsing
const raw = options.raw || false;
const documentsReturnedIn = options.documentsReturnedIn || null;
const useBigInt64 = options.useBigInt64 ?? this.opts.useBigInt64;
const promoteLongs = options.promoteLongs ?? this.opts.promoteLongs;
const promoteValues = options.promoteValues ?? this.opts.promoteValues;
const promoteBuffers = options.promoteBuffers ?? this.opts.promoteBuffers;
const bsonRegExp = options.bsonRegExp ?? this.opts.bsonRegExp;
const validation = this.parseBsonSerializationOptions(options);
// Set up the options
const bsonOptions = {
useBigInt64,
promoteLongs,
promoteValues,
promoteBuffers,
bsonRegExp,
validation
// Due to the strictness of the BSON libraries validation option we need this cast
};
while (this.index < this.data.length) {
const payloadType = this.data.readUInt8(this.index++);
if (payloadType === 0) {
const bsonSize = this.data.readUInt32LE(this.index);
const bin = this.data.slice(this.index, this.index + bsonSize);
this.documents.push(raw ? bin : BSON.deserialize(bin, bsonOptions));
this.index += bsonSize;
}
else if (payloadType === 1) {
// It was decided that no driver makes use of payload type 1
// TODO(NODE-3483): Replace with MongoDeprecationError
throw new error_1.MongoRuntimeError('OP_MSG Payload Type 1 detected unsupported protocol');
}
}
if (this.documents.length === 1 && documentsReturnedIn != null && raw) {
const fieldsAsRaw = {};
fieldsAsRaw[documentsReturnedIn] = true;
bsonOptions.fieldsAsRaw = fieldsAsRaw;
const doc = BSON.deserialize(this.documents[0], bsonOptions);
this.documents = [doc];
}
this.parsed = true;
}
parseBsonSerializationOptions({ enableUtf8Validation }) {
if (enableUtf8Validation === false) {
return { utf8: false };
}
return { utf8: { writeErrors: false } };
}
}
exports.BinMsg = BinMsg;
//# sourceMappingURL=commands.js.map

node_modules/mongodb/lib/cmap/commands.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

node_modules/mongodb/lib/cmap/connect.js generated vendored Normal file

@@ -0,0 +1,394 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.LEGAL_TCP_SOCKET_OPTIONS = exports.LEGAL_TLS_SOCKET_OPTIONS = exports.prepareHandshakeDocument = exports.connect = void 0;
const net = require("net");
const socks_1 = require("socks");
const tls = require("tls");
const bson_1 = require("../bson");
const constants_1 = require("../constants");
const error_1 = require("../error");
const utils_1 = require("../utils");
const auth_provider_1 = require("./auth/auth_provider");
const gssapi_1 = require("./auth/gssapi");
const mongocr_1 = require("./auth/mongocr");
const mongodb_aws_1 = require("./auth/mongodb_aws");
const plain_1 = require("./auth/plain");
const providers_1 = require("./auth/providers");
const scram_1 = require("./auth/scram");
const x509_1 = require("./auth/x509");
const connection_1 = require("./connection");
const constants_2 = require("./wire_protocol/constants");
const AUTH_PROVIDERS = new Map([
[providers_1.AuthMechanism.MONGODB_AWS, new mongodb_aws_1.MongoDBAWS()],
[providers_1.AuthMechanism.MONGODB_CR, new mongocr_1.MongoCR()],
[providers_1.AuthMechanism.MONGODB_GSSAPI, new gssapi_1.GSSAPI()],
[providers_1.AuthMechanism.MONGODB_PLAIN, new plain_1.Plain()],
[providers_1.AuthMechanism.MONGODB_SCRAM_SHA1, new scram_1.ScramSHA1()],
[providers_1.AuthMechanism.MONGODB_SCRAM_SHA256, new scram_1.ScramSHA256()],
[providers_1.AuthMechanism.MONGODB_X509, new x509_1.X509()]
]);
function connect(options, callback) {
makeConnection({ ...options, existingSocket: undefined }, (err, socket) => {
if (err || !socket) {
return callback(err);
}
let ConnectionType = options.connectionType ?? connection_1.Connection;
if (options.autoEncrypter) {
ConnectionType = connection_1.CryptoConnection;
}
performInitialHandshake(new ConnectionType(socket, options), options, callback);
});
}
exports.connect = connect;
function checkSupportedServer(hello, options) {
const serverVersionHighEnough = hello &&
(typeof hello.maxWireVersion === 'number' || hello.maxWireVersion instanceof bson_1.Int32) &&
hello.maxWireVersion >= constants_2.MIN_SUPPORTED_WIRE_VERSION;
const serverVersionLowEnough = hello &&
(typeof hello.minWireVersion === 'number' || hello.minWireVersion instanceof bson_1.Int32) &&
hello.minWireVersion <= constants_2.MAX_SUPPORTED_WIRE_VERSION;
if (serverVersionHighEnough) {
if (serverVersionLowEnough) {
return null;
}
const message = `Server at ${options.hostAddress} reports minimum wire version ${JSON.stringify(hello.minWireVersion)}, but this version of the Node.js Driver requires at most ${constants_2.MAX_SUPPORTED_WIRE_VERSION} (MongoDB ${constants_2.MAX_SUPPORTED_SERVER_VERSION})`;
return new error_1.MongoCompatibilityError(message);
}
const message = `Server at ${options.hostAddress} reports maximum wire version ${JSON.stringify(hello.maxWireVersion) ?? 0}, but this version of the Node.js Driver requires at least ${constants_2.MIN_SUPPORTED_WIRE_VERSION} (MongoDB ${constants_2.MIN_SUPPORTED_SERVER_VERSION})`;
return new error_1.MongoCompatibilityError(message);
}
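// Example (editor's note): a server advertising { minWireVersion: 0,
// maxWireVersion: 17 } is accepted so long as the driver's supported window
// [MIN_SUPPORTED_WIRE_VERSION, MAX_SUPPORTED_WIRE_VERSION] overlaps it;
// checkSupportedServer returns null on success and a MongoCompatibilityError
// describing the mismatch otherwise.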
function performInitialHandshake(conn, options, _callback) {
const callback = function (err, ret) {
if (err && conn) {
conn.destroy({ force: false });
}
_callback(err, ret);
};
const credentials = options.credentials;
if (credentials) {
if (!(credentials.mechanism === providers_1.AuthMechanism.MONGODB_DEFAULT) &&
!AUTH_PROVIDERS.get(credentials.mechanism)) {
callback(new error_1.MongoInvalidArgumentError(`AuthMechanism '${credentials.mechanism}' not supported`));
return;
}
}
const authContext = new auth_provider_1.AuthContext(conn, credentials, options);
prepareHandshakeDocument(authContext, (err, handshakeDoc) => {
if (err || !handshakeDoc) {
return callback(err);
}
const handshakeOptions = Object.assign({}, options);
if (typeof options.connectTimeoutMS === 'number') {
// The handshake technically is a monitoring check, so its socket timeout should be connectTimeoutMS
handshakeOptions.socketTimeoutMS = options.connectTimeoutMS;
}
const start = new Date().getTime();
conn.command((0, utils_1.ns)('admin.$cmd'), handshakeDoc, handshakeOptions, (err, response) => {
if (err) {
callback(err);
return;
}
if (response?.ok === 0) {
callback(new error_1.MongoServerError(response));
return;
}
if (!('isWritablePrimary' in response)) {
// Provide hello-style response document.
response.isWritablePrimary = response[constants_1.LEGACY_HELLO_COMMAND];
}
if (response.helloOk) {
conn.helloOk = true;
}
const supportedServerErr = checkSupportedServer(response, options);
if (supportedServerErr) {
callback(supportedServerErr);
return;
}
if (options.loadBalanced) {
if (!response.serviceId) {
return callback(new error_1.MongoCompatibilityError('Driver attempted to initialize in load balancing mode, ' +
'but the server does not support this mode.'));
}
}
// NOTE: This is metadata attached to the connection while porting away from
// handshake being done in the `Server` class. Likely, it should be
// relocated, or at very least restructured.
conn.hello = response;
conn.lastHelloMS = new Date().getTime() - start;
if (!response.arbiterOnly && credentials) {
// store the response on auth context
authContext.response = response;
const resolvedCredentials = credentials.resolveAuthMechanism(response);
const provider = AUTH_PROVIDERS.get(resolvedCredentials.mechanism);
if (!provider) {
return callback(new error_1.MongoInvalidArgumentError(`No AuthProvider for ${resolvedCredentials.mechanism} defined.`));
}
provider.auth(authContext, err => {
if (err) {
if (err instanceof error_1.MongoError) {
err.addErrorLabel(error_1.MongoErrorLabel.HandshakeError);
if ((0, error_1.needsRetryableWriteLabel)(err, response.maxWireVersion)) {
err.addErrorLabel(error_1.MongoErrorLabel.RetryableWriteError);
}
}
return callback(err);
}
callback(undefined, conn);
});
return;
}
callback(undefined, conn);
});
});
}
/**
* @internal
*
* This function is only exposed for testing purposes.
*/
function prepareHandshakeDocument(authContext, callback) {
const options = authContext.options;
const compressors = options.compressors ? options.compressors : [];
const { serverApi } = authContext.connection;
const handshakeDoc = {
[serverApi?.version ? 'hello' : constants_1.LEGACY_HELLO_COMMAND]: 1,
helloOk: true,
client: options.metadata || (0, utils_1.makeClientMetadata)(options),
compression: compressors
};
if (options.loadBalanced === true) {
handshakeDoc.loadBalanced = true;
}
const credentials = authContext.credentials;
if (credentials) {
if (credentials.mechanism === providers_1.AuthMechanism.MONGODB_DEFAULT && credentials.username) {
handshakeDoc.saslSupportedMechs = `${credentials.source}.${credentials.username}`;
const provider = AUTH_PROVIDERS.get(providers_1.AuthMechanism.MONGODB_SCRAM_SHA256);
if (!provider) {
// This auth mechanism is always present.
return callback(new error_1.MongoInvalidArgumentError(`No AuthProvider for ${providers_1.AuthMechanism.MONGODB_SCRAM_SHA256} defined.`));
}
return provider.prepare(handshakeDoc, authContext, callback);
}
const provider = AUTH_PROVIDERS.get(credentials.mechanism);
if (!provider) {
return callback(new error_1.MongoInvalidArgumentError(`No AuthProvider for ${credentials.mechanism} defined.`));
}
return provider.prepare(handshakeDoc, authContext, callback);
}
callback(undefined, handshakeDoc);
}
exports.prepareHandshakeDocument = prepareHandshakeDocument;
/** @public */
exports.LEGAL_TLS_SOCKET_OPTIONS = [
'ALPNProtocols',
'ca',
'cert',
'checkServerIdentity',
'ciphers',
'crl',
'ecdhCurve',
'key',
'minDHSize',
'passphrase',
'pfx',
'rejectUnauthorized',
'secureContext',
'secureProtocol',
'servername',
'session'
];
/** @public */
exports.LEGAL_TCP_SOCKET_OPTIONS = [
'family',
'hints',
'localAddress',
'localPort',
'lookup'
];
function parseConnectOptions(options) {
const hostAddress = options.hostAddress;
if (!hostAddress)
throw new error_1.MongoInvalidArgumentError('Option "hostAddress" is required');
const result = {};
for (const name of exports.LEGAL_TCP_SOCKET_OPTIONS) {
if (options[name] != null) {
result[name] = options[name];
}
}
if (typeof hostAddress.socketPath === 'string') {
result.path = hostAddress.socketPath;
return result;
}
else if (typeof hostAddress.host === 'string') {
result.host = hostAddress.host;
result.port = hostAddress.port;
return result;
}
else {
// This should never happen since we set up HostAddresses
// But if we don't throw here the socket could hang until timeout
// TODO(NODE-3483)
throw new error_1.MongoRuntimeError(`Unexpected HostAddress ${JSON.stringify(hostAddress)}`);
}
}
function parseSslOptions(options) {
const result = parseConnectOptions(options);
// Merge in valid SSL options
for (const name of exports.LEGAL_TLS_SOCKET_OPTIONS) {
if (options[name] != null) {
result[name] = options[name];
}
}
if (options.existingSocket) {
result.socket = options.existingSocket;
}
// Set default sni servername to be the same as host
if (result.servername == null && result.host && !net.isIP(result.host)) {
result.servername = result.host;
}
return result;
}
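// Example (hypothetical host): with tls enabled and hostAddress
// 'db.example.com:27017', parseSslOptions yields { host: 'db.example.com',
// port: 27017, servername: 'db.example.com', ... } -- SNI defaults to the
// host name whenever it is not an IP literal.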
const SOCKET_ERROR_EVENT_LIST = ['error', 'close', 'timeout', 'parseError'];
const SOCKET_ERROR_EVENTS = new Set(SOCKET_ERROR_EVENT_LIST);
function makeConnection(options, _callback) {
const useTLS = options.tls ?? false;
const keepAlive = options.keepAlive ?? true;
const socketTimeoutMS = options.socketTimeoutMS ?? Reflect.get(options, 'socketTimeout') ?? 0;
const noDelay = options.noDelay ?? true;
const connectTimeoutMS = options.connectTimeoutMS ?? 30000;
const rejectUnauthorized = options.rejectUnauthorized ?? true;
const keepAliveInitialDelay = ((options.keepAliveInitialDelay ?? 120000) > socketTimeoutMS
? Math.round(socketTimeoutMS / 2)
: options.keepAliveInitialDelay) ?? 120000;
const existingSocket = options.existingSocket;
let socket;
const callback = function (err, ret) {
if (err && socket) {
socket.destroy();
}
_callback(err, ret);
};
if (options.proxyHost != null) {
// Currently, only Socks5 is supported.
return makeSocks5Connection({
...options,
connectTimeoutMS // Should always be present for Socks5
}, callback);
}
if (useTLS) {
const tlsSocket = tls.connect(parseSslOptions(options));
if (typeof tlsSocket.disableRenegotiation === 'function') {
tlsSocket.disableRenegotiation();
}
socket = tlsSocket;
}
else if (existingSocket) {
// In the TLS case, parseSslOptions() sets options.socket to existingSocket,
// so we only need to handle the non-TLS case here (where existingSocket
// gives us all we need out of the box).
socket = existingSocket;
}
else {
socket = net.createConnection(parseConnectOptions(options));
}
socket.setKeepAlive(keepAlive, keepAliveInitialDelay);
socket.setTimeout(connectTimeoutMS);
socket.setNoDelay(noDelay);
const connectEvent = useTLS ? 'secureConnect' : 'connect';
let cancellationHandler;
function errorHandler(eventName) {
return (err) => {
SOCKET_ERROR_EVENTS.forEach(event => socket.removeAllListeners(event));
if (cancellationHandler && options.cancellationToken) {
options.cancellationToken.removeListener('cancel', cancellationHandler);
}
socket.removeListener(connectEvent, connectHandler);
callback(connectionFailureError(eventName, err));
};
}
function connectHandler() {
SOCKET_ERROR_EVENTS.forEach(event => socket.removeAllListeners(event));
if (cancellationHandler && options.cancellationToken) {
options.cancellationToken.removeListener('cancel', cancellationHandler);
}
if ('authorizationError' in socket) {
if (socket.authorizationError && rejectUnauthorized) {
return callback(socket.authorizationError);
}
}
socket.setTimeout(socketTimeoutMS);
callback(undefined, socket);
}
SOCKET_ERROR_EVENTS.forEach(event => socket.once(event, errorHandler(event)));
if (options.cancellationToken) {
cancellationHandler = errorHandler('cancel');
options.cancellationToken.once('cancel', cancellationHandler);
}
if (existingSocket) {
process.nextTick(connectHandler);
}
else {
socket.once(connectEvent, connectHandler);
}
}
function makeSocks5Connection(options, callback) {
const hostAddress = utils_1.HostAddress.fromHostPort(options.proxyHost ?? '', // proxyHost is guaranteed to be set here
options.proxyPort ?? 1080);
// First, connect to the proxy server itself:
makeConnection({
...options,
hostAddress,
tls: false,
proxyHost: undefined
}, (err, rawSocket) => {
if (err) {
return callback(err);
}
const destination = parseConnectOptions(options);
if (typeof destination.host !== 'string' || typeof destination.port !== 'number') {
return callback(new error_1.MongoInvalidArgumentError('Can only make Socks5 connections to TCP hosts'));
}
// Then, establish the Socks5 proxy connection:
socks_1.SocksClient.createConnection({
existing_socket: rawSocket,
timeout: options.connectTimeoutMS,
command: 'connect',
destination: {
host: destination.host,
port: destination.port
},
proxy: {
// host and port are ignored because we pass existing_socket
host: 'iLoveJavaScript',
port: 0,
type: 5,
userId: options.proxyUsername || undefined,
password: options.proxyPassword || undefined
}
}).then(({ socket }) => {
// Finally, now treat the resulting duplex stream as the
// socket over which we send and receive wire protocol messages:
makeConnection({
...options,
existingSocket: socket,
proxyHost: undefined
}, callback);
}, error => callback(connectionFailureError('error', error)));
});
}
function connectionFailureError(type, err) {
switch (type) {
case 'error':
return new error_1.MongoNetworkError(err);
case 'timeout':
return new error_1.MongoNetworkTimeoutError('connection timed out');
case 'close':
return new error_1.MongoNetworkError('connection closed');
case 'cancel':
return new error_1.MongoNetworkError('connection establishment was cancelled');
default:
return new error_1.MongoNetworkError('unknown network error');
}
}
//# sourceMappingURL=connect.js.map

node_modules/mongodb/lib/cmap/connect.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

node_modules/mongodb/lib/cmap/connection.js generated vendored Normal file

@@ -0,0 +1,496 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.hasSessionSupport = exports.CryptoConnection = exports.Connection = void 0;
const timers_1 = require("timers");
const constants_1 = require("../constants");
const error_1 = require("../error");
const mongo_types_1 = require("../mongo_types");
const sessions_1 = require("../sessions");
const utils_1 = require("../utils");
const command_monitoring_events_1 = require("./command_monitoring_events");
const commands_1 = require("./commands");
const message_stream_1 = require("./message_stream");
const stream_description_1 = require("./stream_description");
const shared_1 = require("./wire_protocol/shared");
/** @internal */
const kStream = Symbol('stream');
/** @internal */
const kQueue = Symbol('queue');
/** @internal */
const kMessageStream = Symbol('messageStream');
/** @internal */
const kGeneration = Symbol('generation');
/** @internal */
const kLastUseTime = Symbol('lastUseTime');
/** @internal */
const kClusterTime = Symbol('clusterTime');
/** @internal */
const kDescription = Symbol('description');
/** @internal */
const kHello = Symbol('hello');
/** @internal */
const kAutoEncrypter = Symbol('autoEncrypter');
/** @internal */
const kDelayedTimeoutId = Symbol('delayedTimeoutId');
const INVALID_QUEUE_SIZE = 'Connection internal queue contains more than 1 operation description';
/** @internal */
class Connection extends mongo_types_1.TypedEventEmitter {
constructor(stream, options) {
super();
this.id = options.id;
this.address = streamIdentifier(stream, options);
this.socketTimeoutMS = options.socketTimeoutMS ?? 0;
this.monitorCommands = options.monitorCommands;
this.serverApi = options.serverApi;
this.closed = false;
this[kHello] = null;
this[kClusterTime] = null;
this[kDescription] = new stream_description_1.StreamDescription(this.address, options);
this[kGeneration] = options.generation;
this[kLastUseTime] = (0, utils_1.now)();
// setup parser stream and message handling
this[kQueue] = new Map();
this[kMessageStream] = new message_stream_1.MessageStream({
...options,
maxBsonMessageSize: this.hello?.maxBsonMessageSize
});
this[kStream] = stream;
this[kDelayedTimeoutId] = null;
this[kMessageStream].on('message', message => this.onMessage(message));
this[kMessageStream].on('error', error => this.onError(error));
this[kStream].on('close', () => this.onClose());
this[kStream].on('timeout', () => this.onTimeout());
this[kStream].on('error', () => {
/* ignore errors, listen to `close` instead */
});
// hook the message stream up to the passed in stream
this[kStream].pipe(this[kMessageStream]);
this[kMessageStream].pipe(this[kStream]);
}
get description() {
return this[kDescription];
}
get hello() {
return this[kHello];
}
// the `connect` method stores the result of the handshake hello on the connection
set hello(response) {
this[kDescription].receiveResponse(response);
this[kDescription] = Object.freeze(this[kDescription]);
// TODO: remove this, and only use the `StreamDescription` in the future
this[kHello] = response;
}
// Set whether the message stream is for a monitoring connection.
set isMonitoringConnection(value) {
this[kMessageStream].isMonitoringConnection = value;
}
get isMonitoringConnection() {
return this[kMessageStream].isMonitoringConnection;
}
get serviceId() {
return this.hello?.serviceId;
}
get loadBalanced() {
return this.description.loadBalanced;
}
get generation() {
return this[kGeneration] || 0;
}
set generation(generation) {
this[kGeneration] = generation;
}
get idleTime() {
return (0, utils_1.calculateDurationInMs)(this[kLastUseTime]);
}
get clusterTime() {
return this[kClusterTime];
}
get stream() {
return this[kStream];
}
markAvailable() {
this[kLastUseTime] = (0, utils_1.now)();
}
onError(error) {
this.cleanup(true, error);
}
onClose() {
const message = `connection ${this.id} to ${this.address} closed`;
this.cleanup(true, new error_1.MongoNetworkError(message));
}
onTimeout() {
this[kDelayedTimeoutId] = (0, timers_1.setTimeout)(() => {
const message = `connection ${this.id} to ${this.address} timed out`;
const beforeHandshake = this.hello == null;
this.cleanup(true, new error_1.MongoNetworkTimeoutError(message, { beforeHandshake }));
}, 1).unref(); // No need for this timer to hold the event loop open
}
onMessage(message) {
const delayedTimeoutId = this[kDelayedTimeoutId];
if (delayedTimeoutId != null) {
(0, timers_1.clearTimeout)(delayedTimeoutId);
this[kDelayedTimeoutId] = null;
}
// always emit the message, in case we are streaming
this.emit('message', message);
let operationDescription = this[kQueue].get(message.responseTo);
if (!operationDescription && this.isMonitoringConnection) {
// This is how we recover when the initial hello's requestId is not
// the responseTo when hello responses have been skipped:
// First check if the map is of invalid size
if (this[kQueue].size > 1) {
this.cleanup(true, new error_1.MongoRuntimeError(INVALID_QUEUE_SIZE));
}
else {
// Get the first orphaned operation description.
const entry = this[kQueue].entries().next();
if (entry.value != null) {
const [requestId, orphaned] = entry.value;
// If the orphaned operation description exists then set it.
operationDescription = orphaned;
// Remove the entry with the bad request id from the queue.
this[kQueue].delete(requestId);
}
}
}
if (!operationDescription) {
return;
}
const callback = operationDescription.cb;
// SERVER-45775: For exhaust responses we should be able to use the same requestId to
// track response, however the server currently synthetically produces remote requests
// making the `responseTo` change on each response
this[kQueue].delete(message.responseTo);
if ('moreToCome' in message && message.moreToCome) {
// If the operation description check above does find an orphaned
// description and sets the operationDescription then this line will put one
// back in the queue with the correct requestId and will resolve not being able
// to find the next one via the responseTo of the next streaming hello.
this[kQueue].set(message.requestId, operationDescription);
}
else if (operationDescription.socketTimeoutOverride) {
this[kStream].setTimeout(this.socketTimeoutMS);
}
try {
// Pass in the entire description because it has BSON parsing options
message.parse(operationDescription);
}
catch (err) {
// If this error is generated by our own code, it will already have the correct class applied
// if it is not, then it is coming from a catastrophic data parse failure or the BSON library
// in either case, it should not be wrapped
callback(err);
return;
}
if (message.documents[0]) {
const document = message.documents[0];
const session = operationDescription.session;
if (session) {
(0, sessions_1.updateSessionFromResponse)(session, document);
}
if (document.$clusterTime) {
this[kClusterTime] = document.$clusterTime;
this.emit(Connection.CLUSTER_TIME_RECEIVED, document.$clusterTime);
}
if (operationDescription.command) {
if (document.writeConcernError) {
callback(new error_1.MongoWriteConcernError(document.writeConcernError, document), document);
return;
}
if (document.ok === 0 || document.$err || document.errmsg || document.code) {
callback(new error_1.MongoServerError(document));
return;
}
}
else {
// Pre 3.2 support
if (document.ok === 0 || document.$err || document.errmsg) {
callback(new error_1.MongoServerError(document));
return;
}
}
}
callback(undefined, message.documents[0]);
}
destroy(options, callback) {
if (this.closed) {
process.nextTick(() => callback?.());
return;
}
if (typeof callback === 'function') {
this.once('close', () => process.nextTick(() => callback()));
}
// load balanced mode requires that these listeners remain on the connection
// after cleanup on timeouts, errors or close so we remove them before calling
// cleanup.
this.removeAllListeners(Connection.PINNED);
this.removeAllListeners(Connection.UNPINNED);
const message = `connection ${this.id} to ${this.address} closed`;
this.cleanup(options.force, new error_1.MongoNetworkError(message));
}
/**
* A method that cleans up the connection. When `force` is true, this method
* forcibly destroys the socket.
*
* If an error is provided, any in-flight operations will be closed with the error.
*
* This method does nothing if the connection is already closed.
*/
cleanup(force, error) {
if (this.closed) {
return;
}
this.closed = true;
const completeCleanup = () => {
for (const op of this[kQueue].values()) {
op.cb(error);
}
this[kQueue].clear();
this.emit(Connection.CLOSE);
};
this[kStream].removeAllListeners();
this[kMessageStream].removeAllListeners();
this[kMessageStream].destroy();
if (force) {
this[kStream].destroy();
completeCleanup();
return;
}
if (!this[kStream].writableEnded) {
this[kStream].end(() => {
this[kStream].destroy();
completeCleanup();
});
}
else {
completeCleanup();
}
}
command(ns, cmd, options, callback) {
const readPreference = (0, shared_1.getReadPreference)(cmd, options);
const shouldUseOpMsg = supportsOpMsg(this);
const session = options?.session;
let clusterTime = this.clusterTime;
let finalCmd = Object.assign({}, cmd);
if (this.serverApi) {
const { version, strict, deprecationErrors } = this.serverApi;
finalCmd.apiVersion = version;
if (strict != null)
finalCmd.apiStrict = strict;
if (deprecationErrors != null)
finalCmd.apiDeprecationErrors = deprecationErrors;
}
if (hasSessionSupport(this) && session) {
if (session.clusterTime &&
clusterTime &&
session.clusterTime.clusterTime.greaterThan(clusterTime.clusterTime)) {
clusterTime = session.clusterTime;
}
const err = (0, sessions_1.applySession)(session, finalCmd, options);
if (err) {
return callback(err);
}
}
// if we have a known cluster time, gossip it
if (clusterTime) {
finalCmd.$clusterTime = clusterTime;
}
if ((0, shared_1.isSharded)(this) && !shouldUseOpMsg && readPreference && readPreference.mode !== 'primary') {
finalCmd = {
$query: finalCmd,
$readPreference: readPreference.toJSON()
};
}
const commandOptions = Object.assign({
command: true,
numberToSkip: 0,
numberToReturn: -1,
checkKeys: false,
// This value is not overridable
secondaryOk: readPreference.secondaryOk()
}, options);
const cmdNs = `${ns.db}.$cmd`;
const message = shouldUseOpMsg
? new commands_1.Msg(cmdNs, finalCmd, commandOptions)
: new commands_1.Query(cmdNs, finalCmd, commandOptions);
try {
write(this, message, commandOptions, callback);
}
catch (err) {
callback(err);
}
}
}
exports.Connection = Connection;
/** @event */
Connection.COMMAND_STARTED = constants_1.COMMAND_STARTED;
/** @event */
Connection.COMMAND_SUCCEEDED = constants_1.COMMAND_SUCCEEDED;
/** @event */
Connection.COMMAND_FAILED = constants_1.COMMAND_FAILED;
/** @event */
Connection.CLUSTER_TIME_RECEIVED = constants_1.CLUSTER_TIME_RECEIVED;
/** @event */
Connection.CLOSE = constants_1.CLOSE;
/** @event */
Connection.MESSAGE = constants_1.MESSAGE;
/** @event */
Connection.PINNED = constants_1.PINNED;
/** @event */
Connection.UNPINNED = constants_1.UNPINNED;
/** @internal */
class CryptoConnection extends Connection {
constructor(stream, options) {
super(stream, options);
this[kAutoEncrypter] = options.autoEncrypter;
}
/** @internal @override */
command(ns, cmd, options, callback) {
const autoEncrypter = this[kAutoEncrypter];
if (!autoEncrypter) {
return callback(new error_1.MongoMissingDependencyError('No AutoEncrypter available for encryption'));
}
const serverWireVersion = (0, utils_1.maxWireVersion)(this);
if (serverWireVersion === 0) {
// This means the initial handshake hasn't happened yet
return super.command(ns, cmd, options, callback);
}
if (serverWireVersion < 8) {
callback(new error_1.MongoCompatibilityError('Auto-encryption requires a minimum MongoDB version of 4.2'));
return;
}
// Save sort or indexKeys based on the command being run.
// The encrypt API serializes our JS objects to BSON to pass to the native code layer
// and then deserializes the encrypted result; the protocol-level components
// of the command (e.g. sort) are then converted back to JS objects, potentially losing
// important key order information. These fields are never encrypted, so we can save the values
// from before encryption and replace them after encryption has been performed
const sort = cmd.find || cmd.findAndModify ? cmd.sort : null;
const indexKeys = cmd.createIndexes
? cmd.indexes.map((index) => index.key)
: null;
autoEncrypter.encrypt(ns.toString(), cmd, options, (err, encrypted) => {
if (err || encrypted == null) {
callback(err, null);
return;
}
// Replace the saved values
if (sort != null && (cmd.find || cmd.findAndModify)) {
encrypted.sort = sort;
}
if (indexKeys != null && cmd.createIndexes) {
for (const [offset, index] of indexKeys.entries()) {
encrypted.indexes[offset].key = index;
}
}
super.command(ns, encrypted, options, (err, response) => {
if (err || response == null) {
callback(err, response);
return;
}
autoEncrypter.decrypt(response, options, callback);
});
});
}
}
exports.CryptoConnection = CryptoConnection;
/** @internal */
function hasSessionSupport(conn) {
const description = conn.description;
return description.logicalSessionTimeoutMinutes != null || !!description.loadBalanced;
}
exports.hasSessionSupport = hasSessionSupport;
function supportsOpMsg(conn) {
const description = conn.description;
if (description == null) {
return false;
}
return (0, utils_1.maxWireVersion)(conn) >= 6 && !description.__nodejs_mock_server__;
}
function streamIdentifier(stream, options) {
if (options.proxyHost) {
// If proxy options are specified, the properties of `stream` itself
// will not accurately reflect what endpoint this is connected to.
return options.hostAddress.toString();
}
const { remoteAddress, remotePort } = stream;
if (typeof remoteAddress === 'string' && typeof remotePort === 'number') {
return utils_1.HostAddress.fromHostPort(remoteAddress, remotePort).toString();
}
return (0, utils_1.uuidV4)().toString('hex');
}
function write(conn, command, options, callback) {
options = options ?? {};
const operationDescription = {
requestId: command.requestId,
cb: callback,
session: options.session,
noResponse: typeof options.noResponse === 'boolean' ? options.noResponse : false,
documentsReturnedIn: options.documentsReturnedIn,
command: !!options.command,
// for BSON parsing
useBigInt64: typeof options.useBigInt64 === 'boolean' ? options.useBigInt64 : false,
promoteLongs: typeof options.promoteLongs === 'boolean' ? options.promoteLongs : true,
promoteValues: typeof options.promoteValues === 'boolean' ? options.promoteValues : true,
promoteBuffers: typeof options.promoteBuffers === 'boolean' ? options.promoteBuffers : false,
bsonRegExp: typeof options.bsonRegExp === 'boolean' ? options.bsonRegExp : false,
enableUtf8Validation: typeof options.enableUtf8Validation === 'boolean' ? options.enableUtf8Validation : true,
raw: typeof options.raw === 'boolean' ? options.raw : false,
started: 0
};
if (conn[kDescription] && conn[kDescription].compressor) {
operationDescription.agreedCompressor = conn[kDescription].compressor;
if (conn[kDescription].zlibCompressionLevel) {
operationDescription.zlibCompressionLevel = conn[kDescription].zlibCompressionLevel;
}
}
if (typeof options.socketTimeoutMS === 'number') {
operationDescription.socketTimeoutOverride = true;
conn[kStream].setTimeout(options.socketTimeoutMS);
}
// if command monitoring is enabled we need to modify the callback here
if (conn.monitorCommands) {
conn.emit(Connection.COMMAND_STARTED, new command_monitoring_events_1.CommandStartedEvent(conn, command));
operationDescription.started = (0, utils_1.now)();
operationDescription.cb = (err, reply) => {
// Command monitoring spec states that if ok is 1, then we must always emit
// a command succeeded event, even if there's an error. Write concern errors
// will have an ok: 1 in their reply.
if (err && reply?.ok !== 1) {
conn.emit(Connection.COMMAND_FAILED, new command_monitoring_events_1.CommandFailedEvent(conn, command, err, operationDescription.started));
}
else {
if (reply && (reply.ok === 0 || reply.$err)) {
conn.emit(Connection.COMMAND_FAILED, new command_monitoring_events_1.CommandFailedEvent(conn, command, reply, operationDescription.started));
}
else {
conn.emit(Connection.COMMAND_SUCCEEDED, new command_monitoring_events_1.CommandSucceededEvent(conn, command, reply, operationDescription.started));
}
}
if (typeof callback === 'function') {
// Since we're passing through the reply with the write concern error now, we
// need it not to be provided to the original callback in this case so
// retryability does not get tricked into thinking the command actually
// succeeded.
callback(err, err instanceof error_1.MongoWriteConcernError ? undefined : reply);
}
};
}
if (!operationDescription.noResponse) {
conn[kQueue].set(operationDescription.requestId, operationDescription);
}
try {
conn[kMessageStream].writeCommand(command, operationDescription);
}
catch (e) {
if (!operationDescription.noResponse) {
conn[kQueue].delete(operationDescription.requestId);
operationDescription.cb(e);
return;
}
}
if (operationDescription.noResponse) {
operationDescription.cb();
}
}
//# sourceMappingURL=connection.js.map
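
The core of connection.js is its correlation discipline: each in-flight operation's callback is stored in the symbol-keyed queue under its requestId, and an incoming message resolves whichever entry matches its responseTo. A self-contained toy sketch of that pattern (illustrative only, independent of the driver's internals):

// Toy request/response correlator mirroring the kQueue bookkeeping above.
class Correlator {
  constructor() {
    this.queue = new Map(); // requestId -> callback
  }
  send(requestId, callback) {
    this.queue.set(requestId, callback);
  }
  onMessage(message) {
    const callback = this.queue.get(message.responseTo);
    if (!callback) {
      return; // orphaned response, akin to the monitoring-connection recovery above
    }
    this.queue.delete(message.responseTo);
    callback(undefined, message.document);
  }
}

const correlator = new Correlator();
correlator.send(1, (err, doc) => console.log('resolved with', doc));
correlator.onMessage({ responseTo: 1, document: { ok: 1 } });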

1
node_modules/mongodb/lib/cmap/connection.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

591
node_modules/mongodb/lib/cmap/connection_pool.js generated vendored Normal file
View file

@ -0,0 +1,591 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ConnectionPool = exports.PoolState = void 0;
const timers_1 = require("timers");
const constants_1 = require("../constants");
const error_1 = require("../error");
const mongo_types_1 = require("../mongo_types");
const utils_1 = require("../utils");
const connect_1 = require("./connect");
const connection_1 = require("./connection");
const connection_pool_events_1 = require("./connection_pool_events");
const errors_1 = require("./errors");
const metrics_1 = require("./metrics");
/** @internal */
const kServer = Symbol('server');
/** @internal */
const kConnections = Symbol('connections');
/** @internal */
const kPending = Symbol('pending');
/** @internal */
const kCheckedOut = Symbol('checkedOut');
/** @internal */
const kMinPoolSizeTimer = Symbol('minPoolSizeTimer');
/** @internal */
const kGeneration = Symbol('generation');
/** @internal */
const kServiceGenerations = Symbol('serviceGenerations');
/** @internal */
const kConnectionCounter = Symbol('connectionCounter');
/** @internal */
const kCancellationToken = Symbol('cancellationToken');
/** @internal */
const kWaitQueue = Symbol('waitQueue');
/** @internal */
const kCancelled = Symbol('cancelled');
/** @internal */
const kMetrics = Symbol('metrics');
/** @internal */
const kProcessingWaitQueue = Symbol('processingWaitQueue');
/** @internal */
const kPoolState = Symbol('poolState');
/** @internal */
exports.PoolState = Object.freeze({
paused: 'paused',
ready: 'ready',
closed: 'closed'
});
/**
* A pool of connections which dynamically resizes and emits events related to pool activity
* @internal
*/
class ConnectionPool extends mongo_types_1.TypedEventEmitter {
constructor(server, options) {
super();
this.options = Object.freeze({
...options,
connectionType: connection_1.Connection,
maxPoolSize: options.maxPoolSize ?? 100,
minPoolSize: options.minPoolSize ?? 0,
maxConnecting: options.maxConnecting ?? 2,
maxIdleTimeMS: options.maxIdleTimeMS ?? 0,
waitQueueTimeoutMS: options.waitQueueTimeoutMS ?? 0,
minPoolSizeCheckFrequencyMS: options.minPoolSizeCheckFrequencyMS ?? 100,
autoEncrypter: options.autoEncrypter,
metadata: options.metadata
});
if (this.options.minPoolSize > this.options.maxPoolSize) {
throw new error_1.MongoInvalidArgumentError('Connection pool minimum size must not be greater than maximum pool size');
}
this[kPoolState] = exports.PoolState.paused;
this[kServer] = server;
this[kConnections] = new utils_1.List();
this[kPending] = 0;
this[kCheckedOut] = new Set();
this[kMinPoolSizeTimer] = undefined;
this[kGeneration] = 0;
this[kServiceGenerations] = new Map();
this[kConnectionCounter] = (0, utils_1.makeCounter)(1);
this[kCancellationToken] = new mongo_types_1.CancellationToken();
this[kCancellationToken].setMaxListeners(Infinity);
this[kWaitQueue] = new utils_1.List();
this[kMetrics] = new metrics_1.ConnectionPoolMetrics();
this[kProcessingWaitQueue] = false;
process.nextTick(() => {
this.emit(ConnectionPool.CONNECTION_POOL_CREATED, new connection_pool_events_1.ConnectionPoolCreatedEvent(this));
});
}
/** The address of the endpoint the pool is connected to */
get address() {
return this.options.hostAddress.toString();
}
/**
* Check if the pool has been closed
*
* TODO(NODE-3263): We can remove this property once shell no longer needs it
*/
get closed() {
return this[kPoolState] === exports.PoolState.closed;
}
/** An integer representing the SDAM generation of the pool */
get generation() {
return this[kGeneration];
}
/** An integer expressing how many total connections (available + pending + in use) the pool currently has */
get totalConnectionCount() {
return (this.availableConnectionCount + this.pendingConnectionCount + this.currentCheckedOutCount);
}
/** An integer expressing how many connections are currently available in the pool. */
get availableConnectionCount() {
return this[kConnections].length;
}
get pendingConnectionCount() {
return this[kPending];
}
get currentCheckedOutCount() {
return this[kCheckedOut].size;
}
get waitQueueSize() {
return this[kWaitQueue].length;
}
get loadBalanced() {
return this.options.loadBalanced;
}
get serviceGenerations() {
return this[kServiceGenerations];
}
get serverError() {
return this[kServer].description.error;
}
/**
* This is exposed ONLY for use in mongosh, to enable
* killing all connections if a user quits the shell with
* operations in progress.
*
* This property may be removed as a part of NODE-3263.
*/
get checkedOutConnections() {
return this[kCheckedOut];
}
/**
* Get the metrics information for the pool when a wait queue timeout occurs.
*/
waitQueueErrorMetrics() {
return this[kMetrics].info(this.options.maxPoolSize);
}
/**
* Set the pool state to "ready"
*/
ready() {
if (this[kPoolState] !== exports.PoolState.paused) {
return;
}
this[kPoolState] = exports.PoolState.ready;
this.emit(ConnectionPool.CONNECTION_POOL_READY, new connection_pool_events_1.ConnectionPoolReadyEvent(this));
(0, timers_1.clearTimeout)(this[kMinPoolSizeTimer]);
this.ensureMinPoolSize();
}
/**
* Check a connection out of this pool. The connection will continue to be tracked, but no reference to it
* will be held by the pool. This means that if a connection is checked out it MUST be checked back in or
* explicitly destroyed by the new owner.
*/
checkOut(callback) {
this.emit(ConnectionPool.CONNECTION_CHECK_OUT_STARTED, new connection_pool_events_1.ConnectionCheckOutStartedEvent(this));
const waitQueueMember = { callback };
const waitQueueTimeoutMS = this.options.waitQueueTimeoutMS;
if (waitQueueTimeoutMS) {
waitQueueMember.timer = (0, timers_1.setTimeout)(() => {
waitQueueMember[kCancelled] = true;
waitQueueMember.timer = undefined;
this.emit(ConnectionPool.CONNECTION_CHECK_OUT_FAILED, new connection_pool_events_1.ConnectionCheckOutFailedEvent(this, 'timeout'));
waitQueueMember.callback(new errors_1.WaitQueueTimeoutError(this.loadBalanced
? this.waitQueueErrorMetrics()
: 'Timed out while checking out a connection from connection pool', this.address));
}, waitQueueTimeoutMS);
}
this[kWaitQueue].push(waitQueueMember);
process.nextTick(() => this.processWaitQueue());
}
/**
* Check a connection into the pool.
*
* @param connection - The connection to check in
*/
checkIn(connection) {
if (!this[kCheckedOut].has(connection)) {
return;
}
const poolClosed = this.closed;
const stale = this.connectionIsStale(connection);
const willDestroy = !!(poolClosed || stale || connection.closed);
if (!willDestroy) {
connection.markAvailable();
this[kConnections].unshift(connection);
}
this[kCheckedOut].delete(connection);
this.emit(ConnectionPool.CONNECTION_CHECKED_IN, new connection_pool_events_1.ConnectionCheckedInEvent(this, connection));
if (willDestroy) {
const reason = connection.closed ? 'error' : poolClosed ? 'poolClosed' : 'stale';
this.destroyConnection(connection, reason);
}
process.nextTick(() => this.processWaitQueue());
}
/**
* Clear the pool
*
* Pool reset is handled by incrementing the pool's generation count. Any existing connection of a
* previous generation will eventually be pruned during subsequent checkouts.
*/
clear(options = {}) {
if (this.closed) {
return;
}
// handle load balanced case
if (this.loadBalanced) {
const { serviceId } = options;
if (!serviceId) {
throw new error_1.MongoRuntimeError('ConnectionPool.clear() called in load balanced mode with no serviceId.');
}
const sid = serviceId.toHexString();
const generation = this.serviceGenerations.get(sid);
// Only need to worry if the generation exists, since it should
// always be there but typescript needs the check.
if (generation == null) {
throw new error_1.MongoRuntimeError('Service generations are required in load balancer mode.');
}
else {
// Increment the generation for the service id.
this.serviceGenerations.set(sid, generation + 1);
}
this.emit(ConnectionPool.CONNECTION_POOL_CLEARED, new connection_pool_events_1.ConnectionPoolClearedEvent(this, { serviceId }));
return;
}
// handle non load-balanced case
const interruptInUseConnections = options.interruptInUseConnections ?? false;
const oldGeneration = this[kGeneration];
this[kGeneration] += 1;
const alreadyPaused = this[kPoolState] === exports.PoolState.paused;
this[kPoolState] = exports.PoolState.paused;
this.clearMinPoolSizeTimer();
if (!alreadyPaused) {
this.emit(ConnectionPool.CONNECTION_POOL_CLEARED, new connection_pool_events_1.ConnectionPoolClearedEvent(this, { interruptInUseConnections }));
}
if (interruptInUseConnections) {
process.nextTick(() => this.interruptInUseConnections(oldGeneration));
}
this.processWaitQueue();
}
/**
* Closes all stale in-use connections in the pool with a resumable PoolClearedOnNetworkError.
*
* Only connections where `connection.generation <= minGeneration` are killed.
*/
interruptInUseConnections(minGeneration) {
for (const connection of this[kCheckedOut]) {
if (connection.generation <= minGeneration) {
this.checkIn(connection);
connection.onError(new errors_1.PoolClearedOnNetworkError(this));
}
}
}
close(_options, _cb) {
let options = _options;
const callback = (_cb ?? _options);
if (typeof options === 'function') {
options = {};
}
options = Object.assign({ force: false }, options);
if (this.closed) {
return callback();
}
// immediately cancel any in-flight connections
this[kCancellationToken].emit('cancel');
// end the connection counter
if (typeof this[kConnectionCounter].return === 'function') {
this[kConnectionCounter].return(undefined);
}
this[kPoolState] = exports.PoolState.closed;
this.clearMinPoolSizeTimer();
this.processWaitQueue();
(0, utils_1.eachAsync)(this[kConnections].toArray(), (conn, cb) => {
this.emit(ConnectionPool.CONNECTION_CLOSED, new connection_pool_events_1.ConnectionClosedEvent(this, conn, 'poolClosed'));
conn.destroy({ force: !!options.force }, cb);
}, err => {
this[kConnections].clear();
this.emit(ConnectionPool.CONNECTION_POOL_CLOSED, new connection_pool_events_1.ConnectionPoolClosedEvent(this));
callback(err);
});
}
/**
* Runs a lambda with an implicitly checked out connection, checking that connection back in when the lambda
* completes (by calling its callback).
*
* NOTE: please note the required signature of `fn`
*
* @remarks When in load balancer mode, connections can be pinned to cursors or transactions.
* In these cases we pass the connection in to this method to ensure it is used and a new
* connection is not checked out.
*
* @param conn - A pinned connection for use in load balancing mode.
* @param fn - A function which operates on a managed connection
* @param callback - The original callback
*/
withConnection(conn, fn, callback) {
if (conn) {
// use the provided connection, and do _not_ check it in after execution
fn(undefined, conn, (fnErr, result) => {
if (typeof callback === 'function') {
if (fnErr) {
callback(fnErr);
}
else {
callback(undefined, result);
}
}
});
return;
}
this.checkOut((err, conn) => {
// don't callback with `err` here, we might want to act upon it inside `fn`
fn(err, conn, (fnErr, result) => {
if (typeof callback === 'function') {
if (fnErr) {
callback(fnErr);
}
else {
callback(undefined, result);
}
}
if (conn) {
this.checkIn(conn);
}
});
});
}
/** Clear the min pool size timer */
clearMinPoolSizeTimer() {
const minPoolSizeTimer = this[kMinPoolSizeTimer];
if (minPoolSizeTimer) {
(0, timers_1.clearTimeout)(minPoolSizeTimer);
}
}
destroyConnection(connection, reason) {
this.emit(ConnectionPool.CONNECTION_CLOSED, new connection_pool_events_1.ConnectionClosedEvent(this, connection, reason));
// destroy the connection
process.nextTick(() => connection.destroy({ force: false }));
}
connectionIsStale(connection) {
const serviceId = connection.serviceId;
if (this.loadBalanced && serviceId) {
const sid = serviceId.toHexString();
const generation = this.serviceGenerations.get(sid);
return connection.generation !== generation;
}
return connection.generation !== this[kGeneration];
}
connectionIsIdle(connection) {
return !!(this.options.maxIdleTimeMS && connection.idleTime > this.options.maxIdleTimeMS);
}
/**
* Destroys a connection if the connection is perished.
*
* @returns `true` if the connection was destroyed, `false` otherwise.
*/
destroyConnectionIfPerished(connection) {
const isStale = this.connectionIsStale(connection);
const isIdle = this.connectionIsIdle(connection);
if (!isStale && !isIdle && !connection.closed) {
return false;
}
const reason = connection.closed ? 'error' : isStale ? 'stale' : 'idle';
this.destroyConnection(connection, reason);
return true;
}
createConnection(callback) {
const connectOptions = {
...this.options,
id: this[kConnectionCounter].next().value,
generation: this[kGeneration],
cancellationToken: this[kCancellationToken]
};
this[kPending]++;
// This is our version of a "virtual" no-I/O connection as the spec requires
this.emit(ConnectionPool.CONNECTION_CREATED, new connection_pool_events_1.ConnectionCreatedEvent(this, { id: connectOptions.id }));
(0, connect_1.connect)(connectOptions, (err, connection) => {
if (err || !connection) {
this[kPending]--;
this.emit(ConnectionPool.CONNECTION_CLOSED, new connection_pool_events_1.ConnectionClosedEvent(this, { id: connectOptions.id, serviceId: undefined }, 'error'));
if (err instanceof error_1.MongoNetworkError || err instanceof error_1.MongoServerError) {
err.connectionGeneration = connectOptions.generation;
}
callback(err ?? new error_1.MongoRuntimeError('Connection creation failed without error'));
return;
}
// The pool might have closed since we started trying to create a connection
if (this[kPoolState] !== exports.PoolState.ready) {
this[kPending]--;
connection.destroy({ force: true });
callback(this.closed ? new errors_1.PoolClosedError(this) : new errors_1.PoolClearedError(this));
return;
}
// forward all events from the connection to the pool
for (const event of [...constants_1.APM_EVENTS, connection_1.Connection.CLUSTER_TIME_RECEIVED]) {
connection.on(event, (e) => this.emit(event, e));
}
if (this.loadBalanced) {
connection.on(connection_1.Connection.PINNED, pinType => this[kMetrics].markPinned(pinType));
connection.on(connection_1.Connection.UNPINNED, pinType => this[kMetrics].markUnpinned(pinType));
const serviceId = connection.serviceId;
if (serviceId) {
let generation;
const sid = serviceId.toHexString();
if ((generation = this.serviceGenerations.get(sid))) {
connection.generation = generation;
}
else {
this.serviceGenerations.set(sid, 0);
connection.generation = 0;
}
}
}
connection.markAvailable();
this.emit(ConnectionPool.CONNECTION_READY, new connection_pool_events_1.ConnectionReadyEvent(this, connection));
this[kPending]--;
callback(undefined, connection);
return;
});
}
ensureMinPoolSize() {
const minPoolSize = this.options.minPoolSize;
if (this[kPoolState] !== exports.PoolState.ready || minPoolSize === 0) {
return;
}
this[kConnections].prune(connection => this.destroyConnectionIfPerished(connection));
if (this.totalConnectionCount < minPoolSize &&
this.pendingConnectionCount < this.options.maxConnecting) {
// NOTE: ensureMinPoolSize should not try to get all the pending
// connection permits because that potentially delays the availability of
// the connection to a checkout request
this.createConnection((err, connection) => {
if (err) {
this[kServer].handleError(err);
}
if (!err && connection) {
this[kConnections].push(connection);
process.nextTick(() => this.processWaitQueue());
}
if (this[kPoolState] === exports.PoolState.ready) {
(0, timers_1.clearTimeout)(this[kMinPoolSizeTimer]);
this[kMinPoolSizeTimer] = (0, timers_1.setTimeout)(() => this.ensureMinPoolSize(), this.options.minPoolSizeCheckFrequencyMS);
}
});
}
else {
(0, timers_1.clearTimeout)(this[kMinPoolSizeTimer]);
this[kMinPoolSizeTimer] = (0, timers_1.setTimeout)(() => this.ensureMinPoolSize(), this.options.minPoolSizeCheckFrequencyMS);
}
}
processWaitQueue() {
if (this[kProcessingWaitQueue]) {
return;
}
this[kProcessingWaitQueue] = true;
while (this.waitQueueSize) {
const waitQueueMember = this[kWaitQueue].first();
if (!waitQueueMember) {
this[kWaitQueue].shift();
continue;
}
if (waitQueueMember[kCancelled]) {
this[kWaitQueue].shift();
continue;
}
if (this[kPoolState] !== exports.PoolState.ready) {
const reason = this.closed ? 'poolClosed' : 'connectionError';
const error = this.closed ? new errors_1.PoolClosedError(this) : new errors_1.PoolClearedError(this);
this.emit(ConnectionPool.CONNECTION_CHECK_OUT_FAILED, new connection_pool_events_1.ConnectionCheckOutFailedEvent(this, reason));
if (waitQueueMember.timer) {
(0, timers_1.clearTimeout)(waitQueueMember.timer);
}
this[kWaitQueue].shift();
waitQueueMember.callback(error);
continue;
}
if (!this.availableConnectionCount) {
break;
}
const connection = this[kConnections].shift();
if (!connection) {
break;
}
if (!this.destroyConnectionIfPerished(connection)) {
this[kCheckedOut].add(connection);
this.emit(ConnectionPool.CONNECTION_CHECKED_OUT, new connection_pool_events_1.ConnectionCheckedOutEvent(this, connection));
if (waitQueueMember.timer) {
(0, timers_1.clearTimeout)(waitQueueMember.timer);
}
this[kWaitQueue].shift();
waitQueueMember.callback(undefined, connection);
}
}
const { maxPoolSize, maxConnecting } = this.options;
while (this.waitQueueSize > 0 &&
this.pendingConnectionCount < maxConnecting &&
(maxPoolSize === 0 || this.totalConnectionCount < maxPoolSize)) {
const waitQueueMember = this[kWaitQueue].shift();
if (!waitQueueMember || waitQueueMember[kCancelled]) {
continue;
}
this.createConnection((err, connection) => {
if (waitQueueMember[kCancelled]) {
if (!err && connection) {
this[kConnections].push(connection);
}
}
else {
if (err) {
this.emit(ConnectionPool.CONNECTION_CHECK_OUT_FAILED, new connection_pool_events_1.ConnectionCheckOutFailedEvent(this, 'connectionError'));
}
else if (connection) {
this[kCheckedOut].add(connection);
this.emit(ConnectionPool.CONNECTION_CHECKED_OUT, new connection_pool_events_1.ConnectionCheckedOutEvent(this, connection));
}
if (waitQueueMember.timer) {
(0, timers_1.clearTimeout)(waitQueueMember.timer);
}
waitQueueMember.callback(err, connection);
}
process.nextTick(() => this.processWaitQueue());
});
}
this[kProcessingWaitQueue] = false;
}
}
exports.ConnectionPool = ConnectionPool;
/**
* Emitted when the connection pool is created.
* @event
*/
ConnectionPool.CONNECTION_POOL_CREATED = constants_1.CONNECTION_POOL_CREATED;
/**
* Emitted once when the connection pool is closed
* @event
*/
ConnectionPool.CONNECTION_POOL_CLOSED = constants_1.CONNECTION_POOL_CLOSED;
/**
* Emitted each time the connection pool is cleared and its generation is incremented
* @event
*/
ConnectionPool.CONNECTION_POOL_CLEARED = constants_1.CONNECTION_POOL_CLEARED;
/**
* Emitted each time the connection pool is marked ready
* @event
*/
ConnectionPool.CONNECTION_POOL_READY = constants_1.CONNECTION_POOL_READY;
/**
* Emitted when a connection is created.
* @event
*/
ConnectionPool.CONNECTION_CREATED = constants_1.CONNECTION_CREATED;
/**
* Emitted when a connection becomes established, and is ready to use
* @event
*/
ConnectionPool.CONNECTION_READY = constants_1.CONNECTION_READY;
/**
* Emitted when a connection is closed
* @event
*/
ConnectionPool.CONNECTION_CLOSED = constants_1.CONNECTION_CLOSED;
/**
* Emitted when an attempt to check out a connection begins
* @event
*/
ConnectionPool.CONNECTION_CHECK_OUT_STARTED = constants_1.CONNECTION_CHECK_OUT_STARTED;
/**
* Emitted when an attempt to check out a connection fails
* @event
*/
ConnectionPool.CONNECTION_CHECK_OUT_FAILED = constants_1.CONNECTION_CHECK_OUT_FAILED;
/**
* Emitted each time a connection is successfully checked out of the connection pool
* @event
*/
ConnectionPool.CONNECTION_CHECKED_OUT = constants_1.CONNECTION_CHECKED_OUT;
/**
* Emitted each time a connection is successfully checked into the connection pool
* @event
*/
ConnectionPool.CONNECTION_CHECKED_IN = constants_1.CONNECTION_CHECKED_IN;
//# sourceMappingURL=connection_pool.js.map
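
A compact model of the checkout discipline implemented above: hand out an idle connection, create a new one while under maxPoolSize, otherwise park the request in a wait queue until a checkIn releases something. This is a toy sketch of the pattern, not the driver's API:

class TinyPool {
  constructor(maxPoolSize) {
    this.maxPoolSize = maxPoolSize;
    this.total = 0;
    this.idle = [];
    this.waitQueue = [];
  }
  checkOut(callback) {
    const conn = this.idle.shift();
    if (conn) {
      return process.nextTick(callback, undefined, conn);
    }
    if (this.total < this.maxPoolSize) {
      this.total += 1;
      return process.nextTick(callback, undefined, { id: this.total });
    }
    this.waitQueue.push(callback); // parked until a checkIn
  }
  checkIn(conn) {
    const waiter = this.waitQueue.shift();
    if (waiter) {
      return process.nextTick(waiter, undefined, conn);
    }
    this.idle.push(conn);
  }
}

const pool = new TinyPool(1);
pool.checkOut((err, first) => {
  pool.checkOut((err2, second) => console.log('second checkout got', second));
  pool.checkIn(first); // releases the parked waiter
});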

1
node_modules/mongodb/lib/cmap/connection_pool.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

160
node_modules/mongodb/lib/cmap/connection_pool_events.js generated vendored Normal file
View file

@ -0,0 +1,160 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ConnectionPoolClearedEvent = exports.ConnectionCheckedInEvent = exports.ConnectionCheckedOutEvent = exports.ConnectionCheckOutFailedEvent = exports.ConnectionCheckOutStartedEvent = exports.ConnectionClosedEvent = exports.ConnectionReadyEvent = exports.ConnectionCreatedEvent = exports.ConnectionPoolClosedEvent = exports.ConnectionPoolReadyEvent = exports.ConnectionPoolCreatedEvent = exports.ConnectionPoolMonitoringEvent = void 0;
/**
* The base class for all monitoring events published from the connection pool
* @public
* @category Event
*/
class ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
this.time = new Date();
this.address = pool.address;
}
}
exports.ConnectionPoolMonitoringEvent = ConnectionPoolMonitoringEvent;
/**
* An event published when a connection pool is created
* @public
* @category Event
*/
class ConnectionPoolCreatedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
this.options = pool.options;
}
}
exports.ConnectionPoolCreatedEvent = ConnectionPoolCreatedEvent;
/**
* An event published when a connection pool is ready
* @public
* @category Event
*/
class ConnectionPoolReadyEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
}
}
exports.ConnectionPoolReadyEvent = ConnectionPoolReadyEvent;
/**
* An event published when a connection pool is closed
* @public
* @category Event
*/
class ConnectionPoolClosedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
}
}
exports.ConnectionPoolClosedEvent = ConnectionPoolClosedEvent;
/**
* An event published when a connection pool creates a new connection
* @public
* @category Event
*/
class ConnectionCreatedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection) {
super(pool);
this.connectionId = connection.id;
}
}
exports.ConnectionCreatedEvent = ConnectionCreatedEvent;
/**
* An event published when a connection is ready for use
* @public
* @category Event
*/
class ConnectionReadyEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection) {
super(pool);
this.connectionId = connection.id;
}
}
exports.ConnectionReadyEvent = ConnectionReadyEvent;
/**
* An event published when a connection is closed
* @public
* @category Event
*/
class ConnectionClosedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection, reason) {
super(pool);
this.connectionId = connection.id;
this.reason = reason || 'unknown';
this.serviceId = connection.serviceId;
}
}
exports.ConnectionClosedEvent = ConnectionClosedEvent;
/**
* An event published when a request to check a connection out begins
* @public
* @category Event
*/
class ConnectionCheckOutStartedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
}
}
exports.ConnectionCheckOutStartedEvent = ConnectionCheckOutStartedEvent;
/**
* An event published when a request to check a connection out fails
* @public
* @category Event
*/
class ConnectionCheckOutFailedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, reason) {
super(pool);
this.reason = reason;
}
}
exports.ConnectionCheckOutFailedEvent = ConnectionCheckOutFailedEvent;
/**
* An event published when a connection is checked out of the connection pool
* @public
* @category Event
*/
class ConnectionCheckedOutEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection) {
super(pool);
this.connectionId = connection.id;
}
}
exports.ConnectionCheckedOutEvent = ConnectionCheckedOutEvent;
/**
* An event published when a connection is checked into the connection pool
* @public
* @category Event
*/
class ConnectionCheckedInEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection) {
super(pool);
this.connectionId = connection.id;
}
}
exports.ConnectionCheckedInEvent = ConnectionCheckedInEvent;
/**
* An event published when a connection pool is cleared
* @public
* @category Event
*/
class ConnectionPoolClearedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, options = {}) {
super(pool);
this.serviceId = options.serviceId;
this.interruptInUseConnections = options.interruptInUseConnections;
}
}
exports.ConnectionPoolClearedEvent = ConnectionPoolClearedEvent;
//# sourceMappingURL=connection_pool_events.js.map
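
These pool events are relayed up to the MongoClient, so applications can observe checkout traffic without importing internal modules. A minimal sketch (the URI is a placeholder):

const { MongoClient } = require('mongodb');

async function watchPool() {
  const client = new MongoClient('mongodb://localhost:27017');
  for (const name of ['connectionPoolCreated', 'connectionCheckedOut', 'connectionCheckedIn']) {
    client.on(name, event => console.log(name, event.address, event.connectionId ?? '-'));
  }
  await client.connect();
  await client.db('admin').command({ ping: 1 });
  await client.close();
}

watchPool().catch(console.error);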

1
node_modules/mongodb/lib/cmap/connection_pool_events.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"connection_pool_events.js","sourceRoot":"","sources":["../../src/cmap/connection_pool_events.ts"],"names":[],"mappings":";;;AAKA;;;;GAIG;AACH,MAAa,6BAA6B;IAMxC,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,IAAI,CAAC,IAAI,GAAG,IAAI,IAAI,EAAE,CAAC;QACvB,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,OAAO,CAAC;IAC9B,CAAC;CACF;AAXD,sEAWC;AAED;;;;GAIG;AACH,MAAa,0BAA2B,SAAQ,6BAA6B;IAI3E,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,OAAO,CAAC;IAC9B,CAAC;CACF;AATD,gEASC;AAED;;;;GAIG;AACH,MAAa,wBAAyB,SAAQ,6BAA6B;IACzE,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;IACd,CAAC;CACF;AALD,4DAKC;AAED;;;;GAIG;AACH,MAAa,yBAA0B,SAAQ,6BAA6B;IAC1E,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;IACd,CAAC;CACF;AALD,8DAKC;AAED;;;;GAIG;AACH,MAAa,sBAAuB,SAAQ,6BAA6B;IAIvE,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAwC;QACxE,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AATD,wDASC;AAED;;;;GAIG;AACH,MAAa,oBAAqB,SAAQ,6BAA6B;IAIrE,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAsB;QACtD,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AATD,oDASC;AAED;;;;GAIG;AACH,MAAa,qBAAsB,SAAQ,6BAA6B;IAOtE,gBAAgB;IAChB,YACE,IAAoB,EACpB,UAAgD,EAChD,MAAc;QAEd,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;QAClC,IAAI,CAAC,MAAM,GAAG,MAAM,IAAI,SAAS,CAAC;QAClC,IAAI,CAAC,SAAS,GAAG,UAAU,CAAC,SAAS,CAAC;IACxC,CAAC;CACF;AAlBD,sDAkBC;AAED;;;;GAIG;AACH,MAAa,8BAA+B,SAAQ,6BAA6B;IAC/E,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;IACd,CAAC;CACF;AALD,wEAKC;AAED;;;;GAIG;AACH,MAAa,6BAA8B,SAAQ,6BAA6B;IAI9E,gBAAgB;IAChB,YAAY,IAAoB,EAAE,MAAyB;QACzD,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,MAAM,GAAG,MAAM,CAAC;IACvB,CAAC;CACF;AATD,sEASC;AAED;;;;GAIG;AACH,MAAa,yBAA0B,SAAQ,6BAA6B;IAI1E,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAsB;QACtD,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AATD,8DASC;AAED;;;;GAIG;AACH,MAAa,wBAAyB,SAAQ,6BAA6B;IAIzE,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAsB;QACtD,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AATD,4DASC;AAED;;;;GAIG;AACH,MAAa,0BAA2B,SAAQ,6BAA6B;IAM3E,gBAAgB;IAChB,YACE,IAAoB,EACpB,UAAyE,EAAE;QAE3E,KAAK,CAAC,IAAI,CAAC,CAAC;QACZ,IAAI,CAAC,SAAS,GAAG,OAAO,CAAC,SAAS,CAAC;QACnC,IAAI,CAAC,yBAAyB,GAAG,OAAO,CAAC,yBAAyB,CAAC;IACrE,CAAC;CACF;AAfD,gEAeC"}

64
node_modules/mongodb/lib/cmap/errors.js generated vendored Normal file
View file

@ -0,0 +1,64 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.WaitQueueTimeoutError = exports.PoolClearedOnNetworkError = exports.PoolClearedError = exports.PoolClosedError = void 0;
const error_1 = require("../error");
/**
* An error indicating a connection pool is closed
* @category Error
*/
class PoolClosedError extends error_1.MongoDriverError {
constructor(pool) {
super('Attempted to check out a connection from closed connection pool');
this.address = pool.address;
}
get name() {
return 'MongoPoolClosedError';
}
}
exports.PoolClosedError = PoolClosedError;
/**
* An error indicating a connection pool is currently paused
* @category Error
*/
class PoolClearedError extends error_1.MongoNetworkError {
constructor(pool, message) {
const errorMessage = message
? message
: `Connection pool for ${pool.address} was cleared because another operation failed with: "${pool.serverError?.message}"`;
super(errorMessage);
this.address = pool.address;
this.addErrorLabel(error_1.MongoErrorLabel.RetryableWriteError);
}
get name() {
return 'MongoPoolClearedError';
}
}
exports.PoolClearedError = PoolClearedError;
/**
* An error indicating that a connection pool has been cleared after the monitor for that server timed out.
* @category Error
*/
class PoolClearedOnNetworkError extends PoolClearedError {
constructor(pool) {
super(pool, `Connection to ${pool.address} interrupted due to server monitor timeout`);
}
get name() {
return 'PoolClearedOnNetworkError';
}
}
exports.PoolClearedOnNetworkError = PoolClearedOnNetworkError;
/**
* An error thrown when a request to check out a connection times out
* @category Error
*/
class WaitQueueTimeoutError extends error_1.MongoDriverError {
constructor(message, address) {
super(message);
this.address = address;
}
get name() {
return 'MongoWaitQueueTimeoutError';
}
}
exports.WaitQueueTimeoutError = WaitQueueTimeoutError;
//# sourceMappingURL=errors.js.map
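
Each class above pins a stable `name` getter, so application code can branch on err.name without reaching into these internal modules. An illustrative helper along those lines (an assumption about application-side usage, not driver code):

function isPoolClearedError(err) {
  // 'MongoPoolClearedError' is PoolClearedError's name and carries the
  // RetryableWriteError label; the network variant overrides the name.
  return (
    err != null &&
    (err.name === 'MongoPoolClearedError' || err.name === 'PoolClearedOnNetworkError')
  );
}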

1
node_modules/mongodb/lib/cmap/errors.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"errors.js","sourceRoot":"","sources":["../../src/cmap/errors.ts"],"names":[],"mappings":";;;AAAA,oCAAgF;AAGhF;;;GAGG;AACH,MAAa,eAAgB,SAAQ,wBAAgB;IAInD,YAAY,IAAoB;QAC9B,KAAK,CAAC,iEAAiE,CAAC,CAAC;QACzE,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,OAAO,CAAC;IAC9B,CAAC;IAED,IAAa,IAAI;QACf,OAAO,sBAAsB,CAAC;IAChC,CAAC;CACF;AAZD,0CAYC;AAED;;;GAGG;AACH,MAAa,gBAAiB,SAAQ,yBAAiB;IAIrD,YAAY,IAAoB,EAAE,OAAgB;QAChD,MAAM,YAAY,GAAG,OAAO;YAC1B,CAAC,CAAC,OAAO;YACT,CAAC,CAAC,uBAAuB,IAAI,CAAC,OAAO,wDAAwD,IAAI,CAAC,WAAW,EAAE,OAAO,GAAG,CAAC;QAC5H,KAAK,CAAC,YAAY,CAAC,CAAC;QACpB,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,OAAO,CAAC;QAE5B,IAAI,CAAC,aAAa,CAAC,uBAAe,CAAC,mBAAmB,CAAC,CAAC;IAC1D,CAAC;IAED,IAAa,IAAI;QACf,OAAO,uBAAuB,CAAC;IACjC,CAAC;CACF;AAjBD,4CAiBC;AAED;;;GAGG;AACH,MAAa,yBAA0B,SAAQ,gBAAgB;IAC7D,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,EAAE,iBAAiB,IAAI,CAAC,OAAO,4CAA4C,CAAC,CAAC;IACzF,CAAC;IAED,IAAa,IAAI;QACf,OAAO,2BAA2B,CAAC;IACrC,CAAC;CACF;AARD,8DAQC;AAED;;;GAGG;AACH,MAAa,qBAAsB,SAAQ,wBAAgB;IAIzD,YAAY,OAAe,EAAE,OAAe;QAC1C,KAAK,CAAC,OAAO,CAAC,CAAC;QACf,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;IACzB,CAAC;IAED,IAAa,IAAI;QACf,OAAO,4BAA4B,CAAC;IACtC,CAAC;CACF;AAZD,sDAYC"}

156
node_modules/mongodb/lib/cmap/message_stream.js generated vendored Normal file
View file

@ -0,0 +1,156 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MessageStream = void 0;
const stream_1 = require("stream");
const error_1 = require("../error");
const utils_1 = require("../utils");
const commands_1 = require("./commands");
const compression_1 = require("./wire_protocol/compression");
const constants_1 = require("./wire_protocol/constants");
const MESSAGE_HEADER_SIZE = 16;
const COMPRESSION_DETAILS_SIZE = 9; // originalOpcode + uncompressedSize, compressorID
const kDefaultMaxBsonMessageSize = 1024 * 1024 * 16 * 4;
/** @internal */
const kBuffer = Symbol('buffer');
/**
* A duplex stream that is capable of reading and writing raw wire protocol messages, with
* support for optional compression
* @internal
*/
class MessageStream extends stream_1.Duplex {
constructor(options = {}) {
super(options);
/** @internal */
this.isMonitoringConnection = false;
this.maxBsonMessageSize = options.maxBsonMessageSize || kDefaultMaxBsonMessageSize;
this[kBuffer] = new utils_1.BufferPool();
}
get buffer() {
return this[kBuffer];
}
_write(chunk, _, callback) {
this[kBuffer].append(chunk);
processIncomingData(this, callback);
}
_read( /* size */) {
// NOTE: This implementation is empty because we explicitly push data to be read
// when `writeMessage` is called.
return;
}
writeCommand(command, operationDescription) {
const agreedCompressor = operationDescription.agreedCompressor ?? 'none';
if (agreedCompressor === 'none' || !canCompress(command)) {
const data = command.toBin();
this.push(Array.isArray(data) ? Buffer.concat(data) : data);
return;
}
// otherwise, compress the message
const concatenatedOriginalCommandBuffer = Buffer.concat(command.toBin());
const messageToBeCompressed = concatenatedOriginalCommandBuffer.slice(MESSAGE_HEADER_SIZE);
// Extract information needed for OP_COMPRESSED from the uncompressed message
const originalCommandOpCode = concatenatedOriginalCommandBuffer.readInt32LE(12);
const options = {
agreedCompressor,
zlibCompressionLevel: operationDescription.zlibCompressionLevel ?? 0
};
// Compress the message body
(0, compression_1.compress)(options, messageToBeCompressed).then(compressedMessage => {
// Create the msgHeader of OP_COMPRESSED
const msgHeader = Buffer.alloc(MESSAGE_HEADER_SIZE);
msgHeader.writeInt32LE(MESSAGE_HEADER_SIZE + COMPRESSION_DETAILS_SIZE + compressedMessage.length, 0); // messageLength
msgHeader.writeInt32LE(command.requestId, 4); // requestID
msgHeader.writeInt32LE(0, 8); // responseTo (zero)
msgHeader.writeInt32LE(constants_1.OP_COMPRESSED, 12); // opCode
// Create the compression details of OP_COMPRESSED
const compressionDetails = Buffer.alloc(COMPRESSION_DETAILS_SIZE);
compressionDetails.writeInt32LE(originalCommandOpCode, 0); // originalOpcode
compressionDetails.writeInt32LE(messageToBeCompressed.length, 4); // Size of the uncompressed message body, excluding the message header
compressionDetails.writeUInt8(compression_1.Compressor[agreedCompressor], 8); // compressorID
this.push(Buffer.concat([msgHeader, compressionDetails, compressedMessage]));
}, error => {
operationDescription.cb(error);
});
}
}
exports.MessageStream = MessageStream;
// Determine whether a command may be compressed.
// Returns true only if the command contains no uncompressible command terms.
function canCompress(command) {
const commandDoc = command instanceof commands_1.Msg ? command.command : command.query;
const commandName = Object.keys(commandDoc)[0];
return !compression_1.uncompressibleCommands.has(commandName);
}
function processIncomingData(stream, callback) {
const buffer = stream[kBuffer];
const sizeOfMessage = buffer.getInt32();
if (sizeOfMessage == null) {
return callback();
}
if (sizeOfMessage < 0) {
return callback(new error_1.MongoParseError(`Invalid message size: ${sizeOfMessage}`));
}
if (sizeOfMessage > stream.maxBsonMessageSize) {
return callback(new error_1.MongoParseError(`Invalid message size: ${sizeOfMessage}, max allowed: ${stream.maxBsonMessageSize}`));
}
if (sizeOfMessage > buffer.length) {
return callback();
}
const message = buffer.read(sizeOfMessage);
const messageHeader = {
length: message.readInt32LE(0),
requestId: message.readInt32LE(4),
responseTo: message.readInt32LE(8),
opCode: message.readInt32LE(12)
};
const monitorHasAnotherHello = () => {
if (stream.isMonitoringConnection) {
// Can we read the next message size?
const sizeOfMessage = buffer.getInt32();
if (sizeOfMessage != null && sizeOfMessage <= buffer.length) {
return true;
}
}
return false;
};
let ResponseType = messageHeader.opCode === constants_1.OP_MSG ? commands_1.BinMsg : commands_1.Response;
if (messageHeader.opCode !== constants_1.OP_COMPRESSED) {
const messageBody = message.subarray(MESSAGE_HEADER_SIZE);
// If we are a monitoring connection message stream and
// there is more in the buffer that can be read, skip processing since we
// want the last hello command response that is in the buffer.
if (monitorHasAnotherHello()) {
return processIncomingData(stream, callback);
}
stream.emit('message', new ResponseType(message, messageHeader, messageBody));
if (buffer.length >= 4) {
return processIncomingData(stream, callback);
}
return callback();
}
messageHeader.fromCompressed = true;
messageHeader.opCode = message.readInt32LE(MESSAGE_HEADER_SIZE);
messageHeader.length = message.readInt32LE(MESSAGE_HEADER_SIZE + 4);
const compressorID = message[MESSAGE_HEADER_SIZE + 8];
const compressedBuffer = message.slice(MESSAGE_HEADER_SIZE + 9);
// recalculate based on wrapped opcode
ResponseType = messageHeader.opCode === constants_1.OP_MSG ? commands_1.BinMsg : commands_1.Response;
(0, compression_1.decompress)(compressorID, compressedBuffer).then(messageBody => {
if (messageBody.length !== messageHeader.length) {
return callback(new error_1.MongoDecompressionError('Message body and message header must be the same length'));
}
// If we are a monitoring connection message stream and
// there is more in the buffer that can be read, skip processing since we
// want the last hello command response that is in the buffer.
if (monitorHasAnotherHello()) {
return processIncomingData(stream, callback);
}
stream.emit('message', new ResponseType(message, messageHeader, messageBody));
if (buffer.length >= 4) {
return processIncomingData(stream, callback);
}
return callback();
}, error => {
return callback(error);
});
}
//# sourceMappingURL=message_stream.js.map
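
Every wire message begins with the 16-byte little-endian header that processIncomingData decodes above: messageLength, requestId, responseTo, opCode, four bytes each. A standalone sketch of that framing (2013 is the OP_MSG opcode referenced in the file above):

function readHeader(buf) {
  return {
    length: buf.readInt32LE(0),
    requestId: buf.readInt32LE(4),
    responseTo: buf.readInt32LE(8),
    opCode: buf.readInt32LE(12)
  };
}

const header = Buffer.alloc(16);
header.writeInt32LE(16, 0); // messageLength (header only, no body)
header.writeInt32LE(42, 4); // requestId
header.writeInt32LE(0, 8); // responseTo
header.writeInt32LE(2013, 12); // opCode: OP_MSG
console.log(readHeader(header)); // { length: 16, requestId: 42, responseTo: 0, opCode: 2013 }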

1
node_modules/mongodb/lib/cmap/message_stream.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

62
node_modules/mongodb/lib/cmap/metrics.js generated vendored Normal file
View file

@ -0,0 +1,62 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ConnectionPoolMetrics = void 0;
/** @internal */
class ConnectionPoolMetrics {
constructor() {
this.txnConnections = 0;
this.cursorConnections = 0;
this.otherConnections = 0;
}
/**
* Mark a connection as pinned for a specific operation.
*/
markPinned(pinType) {
if (pinType === ConnectionPoolMetrics.TXN) {
this.txnConnections += 1;
}
else if (pinType === ConnectionPoolMetrics.CURSOR) {
this.cursorConnections += 1;
}
else {
this.otherConnections += 1;
}
}
/**
* Unmark a connection as pinned for an operation.
*/
markUnpinned(pinType) {
if (pinType === ConnectionPoolMetrics.TXN) {
this.txnConnections -= 1;
}
else if (pinType === ConnectionPoolMetrics.CURSOR) {
this.cursorConnections -= 1;
}
else {
this.otherConnections -= 1;
}
}
/**
* Return information about the cmap metrics as a string.
*/
info(maxPoolSize) {
return ('Timed out while checking out a connection from connection pool: ' +
`maxPoolSize: ${maxPoolSize}, ` +
`connections in use by cursors: ${this.cursorConnections}, ` +
`connections in use by transactions: ${this.txnConnections}, ` +
`connections in use by other operations: ${this.otherConnections}`);
}
/**
* Reset the metrics to the initial values.
*/
reset() {
this.txnConnections = 0;
this.cursorConnections = 0;
this.otherConnections = 0;
}
}
exports.ConnectionPoolMetrics = ConnectionPoolMetrics;
ConnectionPoolMetrics.TXN = 'txn';
ConnectionPoolMetrics.CURSOR = 'cursor';
ConnectionPoolMetrics.OTHER = 'other';
//# sourceMappingURL=metrics.js.map
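
The metrics class has no dependencies, so its bookkeeping can be exercised directly; the relative require below assumes the vendored layout of this commit:

const { ConnectionPoolMetrics } = require('./node_modules/mongodb/lib/cmap/metrics');

const metrics = new ConnectionPoolMetrics();
metrics.markPinned(ConnectionPoolMetrics.TXN);
metrics.markPinned(ConnectionPoolMetrics.CURSOR);
console.log(metrics.info(100));
// -> "Timed out while checking out a connection from connection pool: maxPoolSize: 100,
//     connections in use by cursors: 1, connections in use by transactions: 1,
//     connections in use by other operations: 0"
metrics.reset();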

1
node_modules/mongodb/lib/cmap/metrics.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"metrics.js","sourceRoot":"","sources":["../../src/cmap/metrics.ts"],"names":[],"mappings":";;;AAAA,gBAAgB;AAChB,MAAa,qBAAqB;IAAlC;QAKE,mBAAc,GAAG,CAAC,CAAC;QACnB,sBAAiB,GAAG,CAAC,CAAC;QACtB,qBAAgB,GAAG,CAAC,CAAC;IAiDvB,CAAC;IA/CC;;OAEG;IACH,UAAU,CAAC,OAAe;QACxB,IAAI,OAAO,KAAK,qBAAqB,CAAC,GAAG,EAAE;YACzC,IAAI,CAAC,cAAc,IAAI,CAAC,CAAC;SAC1B;aAAM,IAAI,OAAO,KAAK,qBAAqB,CAAC,MAAM,EAAE;YACnD,IAAI,CAAC,iBAAiB,IAAI,CAAC,CAAC;SAC7B;aAAM;YACL,IAAI,CAAC,gBAAgB,IAAI,CAAC,CAAC;SAC5B;IACH,CAAC;IAED;;OAEG;IACH,YAAY,CAAC,OAAe;QAC1B,IAAI,OAAO,KAAK,qBAAqB,CAAC,GAAG,EAAE;YACzC,IAAI,CAAC,cAAc,IAAI,CAAC,CAAC;SAC1B;aAAM,IAAI,OAAO,KAAK,qBAAqB,CAAC,MAAM,EAAE;YACnD,IAAI,CAAC,iBAAiB,IAAI,CAAC,CAAC;SAC7B;aAAM;YACL,IAAI,CAAC,gBAAgB,IAAI,CAAC,CAAC;SAC5B;IACH,CAAC;IAED;;OAEG;IACH,IAAI,CAAC,WAAmB;QACtB,OAAO,CACL,kEAAkE;YAClE,gBAAgB,WAAW,IAAI;YAC/B,kCAAkC,IAAI,CAAC,iBAAiB,IAAI;YAC5D,uCAAuC,IAAI,CAAC,cAAc,IAAI;YAC9D,2CAA2C,IAAI,CAAC,gBAAgB,EAAE,CACnE,CAAC;IACJ,CAAC;IAED;;OAEG;IACH,KAAK;QACH,IAAI,CAAC,cAAc,GAAG,CAAC,CAAC;QACxB,IAAI,CAAC,iBAAiB,GAAG,CAAC,CAAC;QAC3B,IAAI,CAAC,gBAAgB,GAAG,CAAC,CAAC;IAC5B,CAAC;;AAvDH,sDAwDC;AAvDiB,yBAAG,GAAG,KAAc,CAAC;AACrB,4BAAM,GAAG,QAAiB,CAAC;AAC3B,2BAAK,GAAG,OAAgB,CAAC"}

51
node_modules/mongodb/lib/cmap/stream_description.js generated vendored Normal file
View file

@ -0,0 +1,51 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.StreamDescription = void 0;
const common_1 = require("../sdam/common");
const server_description_1 = require("../sdam/server_description");
const RESPONSE_FIELDS = [
'minWireVersion',
'maxWireVersion',
'maxBsonObjectSize',
'maxMessageSizeBytes',
'maxWriteBatchSize',
'logicalSessionTimeoutMinutes'
];
/** @public */
class StreamDescription {
constructor(address, options) {
this.address = address;
this.type = common_1.ServerType.Unknown;
this.minWireVersion = undefined;
this.maxWireVersion = undefined;
this.maxBsonObjectSize = 16777216;
this.maxMessageSizeBytes = 48000000;
this.maxWriteBatchSize = 100000;
this.logicalSessionTimeoutMinutes = options?.logicalSessionTimeoutMinutes;
this.loadBalanced = !!options?.loadBalanced;
this.compressors =
options && options.compressors && Array.isArray(options.compressors)
? options.compressors
: [];
}
receiveResponse(response) {
if (response == null) {
return;
}
this.type = (0, server_description_1.parseServerType)(response);
for (const field of RESPONSE_FIELDS) {
if (response[field] != null) {
this[field] = response[field];
}
// testing case
if ('__nodejs_mock_server__' in response) {
this.__nodejs_mock_server__ = response['__nodejs_mock_server__'];
}
}
if (response.compression) {
this.compressor = this.compressors.filter(c => response.compression?.includes(c))[0];
}
}
}
exports.StreamDescription = StreamDescription;
//# sourceMappingURL=stream_description.js.map
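
A short sketch of how `receiveResponse` above absorbs a server hello reply. The response document here is a plausible hand-written example, not taken from this diff; the deep `require` path into the vendored build is illustrative only.

```js
const { StreamDescription } = require('mongodb/lib/cmap/stream_description');

const description = new StreamDescription('localhost:27017', {
  compressors: ['zlib', 'snappy'] // compressors the client is willing to use
});

description.receiveResponse({
  minWireVersion: 0,
  maxWireVersion: 17,
  compression: ['snappy'] // compressors the server advertises
});

console.log(description.maxWireVersion); // 17 — copied in via RESPONSE_FIELDS
console.log(description.compressor);     // 'snappy' — first client compressor the server also supports
```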

1
node_modules/mongodb/lib/cmap/stream_description.js.map generated vendored Normal file
View file

@@ -0,0 +1 @@
{"version":3,"file":"stream_description.js","sourceRoot":"","sources":["../../src/cmap/stream_description.ts"],"names":[],"mappings":";;;AACA,2CAA4C;AAC5C,mEAA6D;AAG7D,MAAM,eAAe,GAAG;IACtB,gBAAgB;IAChB,gBAAgB;IAChB,mBAAmB;IACnB,qBAAqB;IACrB,mBAAmB;IACnB,8BAA8B;CACtB,CAAC;AASX,cAAc;AACd,MAAa,iBAAiB;IAiB5B,YAAY,OAAe,EAAE,OAAkC;QAC7D,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;QACvB,IAAI,CAAC,IAAI,GAAG,mBAAU,CAAC,OAAO,CAAC;QAC/B,IAAI,CAAC,cAAc,GAAG,SAAS,CAAC;QAChC,IAAI,CAAC,cAAc,GAAG,SAAS,CAAC;QAChC,IAAI,CAAC,iBAAiB,GAAG,QAAQ,CAAC;QAClC,IAAI,CAAC,mBAAmB,GAAG,QAAQ,CAAC;QACpC,IAAI,CAAC,iBAAiB,GAAG,MAAM,CAAC;QAChC,IAAI,CAAC,4BAA4B,GAAG,OAAO,EAAE,4BAA4B,CAAC;QAC1E,IAAI,CAAC,YAAY,GAAG,CAAC,CAAC,OAAO,EAAE,YAAY,CAAC;QAC5C,IAAI,CAAC,WAAW;YACd,OAAO,IAAI,OAAO,CAAC,WAAW,IAAI,KAAK,CAAC,OAAO,CAAC,OAAO,CAAC,WAAW,CAAC;gBAClE,CAAC,CAAC,OAAO,CAAC,WAAW;gBACrB,CAAC,CAAC,EAAE,CAAC;IACX,CAAC;IAED,eAAe,CAAC,QAAyB;QACvC,IAAI,QAAQ,IAAI,IAAI,EAAE;YACpB,OAAO;SACR;QACD,IAAI,CAAC,IAAI,GAAG,IAAA,oCAAe,EAAC,QAAQ,CAAC,CAAC;QACtC,KAAK,MAAM,KAAK,IAAI,eAAe,EAAE;YACnC,IAAI,QAAQ,CAAC,KAAK,CAAC,IAAI,IAAI,EAAE;gBAC3B,IAAI,CAAC,KAAK,CAAC,GAAG,QAAQ,CAAC,KAAK,CAAC,CAAC;aAC/B;YAED,eAAe;YACf,IAAI,wBAAwB,IAAI,QAAQ,EAAE;gBACxC,IAAI,CAAC,sBAAsB,GAAG,QAAQ,CAAC,wBAAwB,CAAC,CAAC;aAClE;SACF;QAED,IAAI,QAAQ,CAAC,WAAW,EAAE;YACxB,IAAI,CAAC,UAAU,GAAG,IAAI,CAAC,WAAW,CAAC,MAAM,CAAC,CAAC,CAAC,EAAE,CAAC,QAAQ,CAAC,WAAW,EAAE,QAAQ,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC;SACtF;IACH,CAAC;CACF;AArDD,8CAqDC"}

81
node_modules/mongodb/lib/cmap/wire_protocol/compression.js generated vendored Normal file
View file

@@ -0,0 +1,81 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.decompress = exports.compress = exports.uncompressibleCommands = exports.Compressor = void 0;
const util_1 = require("util");
const zlib = require("zlib");
const constants_1 = require("../../constants");
const deps_1 = require("../../deps");
const error_1 = require("../../error");
/** @public */
exports.Compressor = Object.freeze({
none: 0,
snappy: 1,
zlib: 2,
zstd: 3
});
exports.uncompressibleCommands = new Set([
constants_1.LEGACY_HELLO_COMMAND,
'saslStart',
'saslContinue',
'getnonce',
'authenticate',
'createUser',
'updateUser',
'copydbSaslStart',
'copydbgetnonce',
'copydb'
]);
const ZSTD_COMPRESSION_LEVEL = 3;
const zlibInflate = (0, util_1.promisify)(zlib.inflate.bind(zlib));
const zlibDeflate = (0, util_1.promisify)(zlib.deflate.bind(zlib));
// Facilitate compressing a message using an agreed compressor
async function compress(options, dataToBeCompressed) {
const zlibOptions = {};
switch (options.agreedCompressor) {
case 'snappy':
if ('kModuleError' in deps_1.Snappy) {
throw deps_1.Snappy['kModuleError'];
}
return deps_1.Snappy.compress(dataToBeCompressed);
case 'zstd':
if ('kModuleError' in deps_1.ZStandard) {
throw deps_1.ZStandard['kModuleError'];
}
return deps_1.ZStandard.compress(dataToBeCompressed, ZSTD_COMPRESSION_LEVEL);
case 'zlib':
if (options.zlibCompressionLevel) {
zlibOptions.level = options.zlibCompressionLevel;
}
return zlibDeflate(dataToBeCompressed, zlibOptions);
default:
throw new error_1.MongoInvalidArgumentError(`Unknown compressor ${options.agreedCompressor} failed to compress`);
}
}
exports.compress = compress;
// Decompress a message using the given compressor
async function decompress(compressorID, compressedData) {
if (compressorID !== exports.Compressor.snappy &&
compressorID !== exports.Compressor.zstd &&
compressorID !== exports.Compressor.zlib &&
compressorID !== exports.Compressor.none) {
throw new error_1.MongoDecompressionError(`Server sent message compressed using an unsupported compressor. (Received compressor ID ${compressorID})`);
}
switch (compressorID) {
case exports.Compressor.snappy:
if ('kModuleError' in deps_1.Snappy) {
throw deps_1.Snappy['kModuleError'];
}
return deps_1.Snappy.uncompress(compressedData, { asBuffer: true });
case exports.Compressor.zstd:
if ('kModuleError' in deps_1.ZStandard) {
throw deps_1.ZStandard['kModuleError'];
}
return deps_1.ZStandard.decompress(compressedData);
case exports.Compressor.zlib:
return zlibInflate(compressedData);
default:
return compressedData;
}
}
exports.decompress = decompress;
//# sourceMappingURL=compression.js.map
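
A round-trip sketch for the zlib path of `compress()`/`decompress()` above. Note that `Compressor.zlib === 2`, so the compressor ID passed below matches what an `OP_COMPRESSED` message would carry; the deep `require` path is illustrative only.

```js
const { compress, decompress, Compressor } = require('mongodb/lib/cmap/wire_protocol/compression');

async function roundTrip() {
  const payload = Buffer.from('{"hello":"world"}');
  // zlibCompressionLevel flows into zlibOptions.level in compress() above.
  const squeezed = await compress({ agreedCompressor: 'zlib', zlibCompressionLevel: 6 }, payload);
  const restored = await decompress(Compressor.zlib, squeezed);
  console.log(restored.equals(payload)); // true
}

roundTrip().catch(console.error);
```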

1
node_modules/mongodb/lib/cmap/wire_protocol/compression.js.map generated vendored Normal file
View file

@@ -0,0 +1 @@
{"version":3,"file":"compression.js","sourceRoot":"","sources":["../../../src/cmap/wire_protocol/compression.ts"],"names":[],"mappings":";;;AAAA,+BAAiC;AACjC,6BAA6B;AAE7B,+CAAuD;AACvD,qCAA+C;AAC/C,uCAAiF;AAEjF,cAAc;AACD,QAAA,UAAU,GAAG,MAAM,CAAC,MAAM,CAAC;IACtC,IAAI,EAAE,CAAC;IACP,MAAM,EAAE,CAAC;IACT,IAAI,EAAE,CAAC;IACP,IAAI,EAAE,CAAC;CACC,CAAC,CAAC;AAQC,QAAA,sBAAsB,GAAG,IAAI,GAAG,CAAC;IAC5C,gCAAoB;IACpB,WAAW;IACX,cAAc;IACd,UAAU;IACV,cAAc;IACd,YAAY;IACZ,YAAY;IACZ,iBAAiB;IACjB,gBAAgB;IAChB,QAAQ;CACT,CAAC,CAAC;AAEH,MAAM,sBAAsB,GAAG,CAAC,CAAC;AAEjC,MAAM,WAAW,GAAG,IAAA,gBAAS,EAAC,IAAI,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC;AACvD,MAAM,WAAW,GAAG,IAAA,gBAAS,EAAC,IAAI,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC;AAEvD,8DAA8D;AACvD,KAAK,UAAU,QAAQ,CAC5B,OAA2E,EAC3E,kBAA0B;IAE1B,MAAM,WAAW,GAAG,EAAsB,CAAC;IAC3C,QAAQ,OAAO,CAAC,gBAAgB,EAAE;QAChC,KAAK,QAAQ;YACX,IAAI,cAAc,IAAI,aAAM,EAAE;gBAC5B,MAAM,aAAM,CAAC,cAAc,CAAC,CAAC;aAC9B;YACD,OAAO,aAAM,CAAC,QAAQ,CAAC,kBAAkB,CAAC,CAAC;QAE7C,KAAK,MAAM;YACT,IAAI,cAAc,IAAI,gBAAS,EAAE;gBAC/B,MAAM,gBAAS,CAAC,cAAc,CAAC,CAAC;aACjC;YACD,OAAO,gBAAS,CAAC,QAAQ,CAAC,kBAAkB,EAAE,sBAAsB,CAAC,CAAC;QAExE,KAAK,MAAM;YACT,IAAI,OAAO,CAAC,oBAAoB,EAAE;gBAChC,WAAW,CAAC,KAAK,GAAG,OAAO,CAAC,oBAAoB,CAAC;aAClD;YACD,OAAO,WAAW,CAAC,kBAAkB,EAAE,WAAW,CAAC,CAAC;QAEtD;YACE,MAAM,IAAI,iCAAyB,CACjC,sBAAsB,OAAO,CAAC,gBAAgB,qBAAqB,CACpE,CAAC;KACL;AACH,CAAC;AA7BD,4BA6BC;AAED,kDAAkD;AAC3C,KAAK,UAAU,UAAU,CAAC,YAAoB,EAAE,cAAsB;IAC3E,IACE,YAAY,KAAK,kBAAU,CAAC,MAAM;QAClC,YAAY,KAAK,kBAAU,CAAC,IAAI;QAChC,YAAY,KAAK,kBAAU,CAAC,IAAI;QAChC,YAAY,KAAK,kBAAU,CAAC,IAAI,EAChC;QACA,MAAM,IAAI,+BAAuB,CAC/B,2FAA2F,YAAY,GAAG,CAC3G,CAAC;KACH;IAED,QAAQ,YAAY,EAAE;QACpB,KAAK,kBAAU,CAAC,MAAM;YACpB,IAAI,cAAc,IAAI,aAAM,EAAE;gBAC5B,MAAM,aAAM,CAAC,cAAc,CAAC,CAAC;aAC9B;YACD,OAAO,aAAM,CAAC,UAAU,CAAC,cAAc,EAAE,EAAE,QAAQ,EAAE,IAAI,EAAE,CAAC,CAAC;QAE/D,KAAK,kBAAU,CAAC,IAAI;YAClB,IAAI,cAAc,IAAI,gBAAS,EAAE;gBAC/B,MAAM,gBAAS,CAAC,cAAc,CAAC,CAAC;aACjC;YACD,OAAO,gBAAS,CAAC,UAAU,CAAC,cAAc,CAAC,CAAC;QAE9C,KAAK,kBAAU,CAAC,IAAI;YAClB,OAAO,WAAW,CAAC,cAAc,CAAC,CAAC;QAErC;YACE,OAAO,cAAc,CAAC;KACzB;AACH,CAAC;AA/BD,gCA+BC"}

15
node_modules/mongodb/lib/cmap/wire_protocol/constants.js generated vendored Normal file
View file

@@ -0,0 +1,15 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OP_MSG = exports.OP_COMPRESSED = exports.OP_DELETE = exports.OP_QUERY = exports.OP_INSERT = exports.OP_UPDATE = exports.OP_REPLY = exports.MAX_SUPPORTED_WIRE_VERSION = exports.MIN_SUPPORTED_WIRE_VERSION = exports.MAX_SUPPORTED_SERVER_VERSION = exports.MIN_SUPPORTED_SERVER_VERSION = void 0;
exports.MIN_SUPPORTED_SERVER_VERSION = '3.6';
exports.MAX_SUPPORTED_SERVER_VERSION = '6.0';
exports.MIN_SUPPORTED_WIRE_VERSION = 6;
exports.MAX_SUPPORTED_WIRE_VERSION = 17;
exports.OP_REPLY = 1;
exports.OP_UPDATE = 2001;
exports.OP_INSERT = 2002;
exports.OP_QUERY = 2004;
exports.OP_DELETE = 2006;
exports.OP_COMPRESSED = 2012;
exports.OP_MSG = 2013;
//# sourceMappingURL=constants.js.map
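
A sketch of the kind of wire-version gate these constants feed during the handshake; the overlap check itself is illustrative, not the driver's exact code.

```js
const {
  MIN_SUPPORTED_WIRE_VERSION, // 6, i.e. MongoDB 3.6
  MAX_SUPPORTED_WIRE_VERSION  // 17, i.e. MongoDB 6.0
} = require('mongodb/lib/cmap/wire_protocol/constants');

// A server is usable if its advertised wire-version range overlaps the driver's.
function serverIsSupported(hello) {
  return (
    hello.minWireVersion <= MAX_SUPPORTED_WIRE_VERSION &&
    hello.maxWireVersion >= MIN_SUPPORTED_WIRE_VERSION
  );
}

console.log(serverIsSupported({ minWireVersion: 0, maxWireVersion: 13 })); // true  (MongoDB 5.0)
console.log(serverIsSupported({ minWireVersion: 2, maxWireVersion: 5 }));  // false (MongoDB 3.4, pre-3.6)
```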

1
node_modules/mongodb/lib/cmap/wire_protocol/constants.js.map generated vendored Normal file
View file

@@ -0,0 +1 @@
{"version":3,"file":"constants.js","sourceRoot":"","sources":["../../../src/cmap/wire_protocol/constants.ts"],"names":[],"mappings":";;;AAAa,QAAA,4BAA4B,GAAG,KAAK,CAAC;AACrC,QAAA,4BAA4B,GAAG,KAAK,CAAC;AACrC,QAAA,0BAA0B,GAAG,CAAC,CAAC;AAC/B,QAAA,0BAA0B,GAAG,EAAE,CAAC;AAChC,QAAA,QAAQ,GAAG,CAAC,CAAC;AACb,QAAA,SAAS,GAAG,IAAI,CAAC;AACjB,QAAA,SAAS,GAAG,IAAI,CAAC;AACjB,QAAA,QAAQ,GAAG,IAAI,CAAC;AAChB,QAAA,SAAS,GAAG,IAAI,CAAC;AACjB,QAAA,aAAa,GAAG,IAAI,CAAC;AACrB,QAAA,MAAM,GAAG,IAAI,CAAC"}

55
node_modules/mongodb/lib/cmap/wire_protocol/shared.js generated vendored Normal file
View file

@@ -0,0 +1,55 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.isSharded = exports.applyCommonQueryOptions = exports.getReadPreference = void 0;
const error_1 = require("../../error");
const read_preference_1 = require("../../read_preference");
const common_1 = require("../../sdam/common");
const topology_description_1 = require("../../sdam/topology_description");
function getReadPreference(cmd, options) {
// Default to command version of the readPreference
let readPreference = cmd.readPreference || read_preference_1.ReadPreference.primary;
// If we have an option readPreference override the command one
if (options?.readPreference) {
readPreference = options.readPreference;
}
if (typeof readPreference === 'string') {
readPreference = read_preference_1.ReadPreference.fromString(readPreference);
}
if (!(readPreference instanceof read_preference_1.ReadPreference)) {
throw new error_1.MongoInvalidArgumentError('Option "readPreference" must be a ReadPreference instance');
}
return readPreference;
}
exports.getReadPreference = getReadPreference;
function applyCommonQueryOptions(queryOptions, options) {
Object.assign(queryOptions, {
raw: typeof options.raw === 'boolean' ? options.raw : false,
promoteLongs: typeof options.promoteLongs === 'boolean' ? options.promoteLongs : true,
promoteValues: typeof options.promoteValues === 'boolean' ? options.promoteValues : true,
promoteBuffers: typeof options.promoteBuffers === 'boolean' ? options.promoteBuffers : false,
bsonRegExp: typeof options.bsonRegExp === 'boolean' ? options.bsonRegExp : false,
enableUtf8Validation: typeof options.enableUtf8Validation === 'boolean' ? options.enableUtf8Validation : true
});
if (options.session) {
queryOptions.session = options.session;
}
return queryOptions;
}
exports.applyCommonQueryOptions = applyCommonQueryOptions;
function isSharded(topologyOrServer) {
if (topologyOrServer == null) {
return false;
}
if (topologyOrServer.description && topologyOrServer.description.type === common_1.ServerType.Mongos) {
return true;
}
// NOTE: This is incredibly inefficient, and should be removed once command construction
// happens based on `Server` not `Topology`.
if (topologyOrServer.description && topologyOrServer.description instanceof topology_description_1.TopologyDescription) {
const servers = Array.from(topologyOrServer.description.servers.values());
return servers.some((server) => server.type === common_1.ServerType.Mongos);
}
return false;
}
exports.isSharded = isSharded;
//# sourceMappingURL=shared.js.map
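
A sketch of the precedence `getReadPreference()` implements above: an options-level read preference beats one embedded in the command, and strings are upgraded to `ReadPreference` instances. The deep `require` path is illustrative only.

```js
const { getReadPreference } = require('mongodb/lib/cmap/wire_protocol/shared');
const { ReadPreference } = require('mongodb');

const cmd = { find: 'pets', readPreference: ReadPreference.secondary };

console.log(getReadPreference(cmd, {}).mode);                                     // 'secondary'
console.log(getReadPreference(cmd, { readPreference: 'primaryPreferred' }).mode); // 'primaryPreferred'
console.log(getReadPreference({}, {}).mode);                                      // 'primary' (the default)
```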

1
node_modules/mongodb/lib/cmap/wire_protocol/shared.js.map generated vendored Normal file
View file

@@ -0,0 +1 @@
{"version":3,"file":"shared.js","sourceRoot":"","sources":["../../../src/cmap/wire_protocol/shared.ts"],"names":[],"mappings":";;;AACA,uCAAwD;AAExD,2DAAuD;AACvD,8CAA+C;AAI/C,0EAAsE;AAQtE,SAAgB,iBAAiB,CAAC,GAAa,EAAE,OAA8B;IAC7E,mDAAmD;IACnD,IAAI,cAAc,GAAG,GAAG,CAAC,cAAc,IAAI,gCAAc,CAAC,OAAO,CAAC;IAClE,+DAA+D;IAC/D,IAAI,OAAO,EAAE,cAAc,EAAE;QAC3B,cAAc,GAAG,OAAO,CAAC,cAAc,CAAC;KACzC;IAED,IAAI,OAAO,cAAc,KAAK,QAAQ,EAAE;QACtC,cAAc,GAAG,gCAAc,CAAC,UAAU,CAAC,cAAc,CAAC,CAAC;KAC5D;IAED,IAAI,CAAC,CAAC,cAAc,YAAY,gCAAc,CAAC,EAAE;QAC/C,MAAM,IAAI,iCAAyB,CACjC,2DAA2D,CAC5D,CAAC;KACH;IAED,OAAO,cAAc,CAAC;AACxB,CAAC;AAnBD,8CAmBC;AAED,SAAgB,uBAAuB,CACrC,YAA4B,EAC5B,OAAuB;IAEvB,MAAM,CAAC,MAAM,CAAC,YAAY,EAAE;QAC1B,GAAG,EAAE,OAAO,OAAO,CAAC,GAAG,KAAK,SAAS,CAAC,CAAC,CAAC,OAAO,CAAC,GAAG,CAAC,CAAC,CAAC,KAAK;QAC3D,YAAY,EAAE,OAAO,OAAO,CAAC,YAAY,KAAK,SAAS,CAAC,CAAC,CAAC,OAAO,CAAC,YAAY,CAAC,CAAC,CAAC,IAAI;QACrF,aAAa,EAAE,OAAO,OAAO,CAAC,aAAa,KAAK,SAAS,CAAC,CAAC,CAAC,OAAO,CAAC,aAAa,CAAC,CAAC,CAAC,IAAI;QACxF,cAAc,EAAE,OAAO,OAAO,CAAC,cAAc,KAAK,SAAS,CAAC,CAAC,CAAC,OAAO,CAAC,cAAc,CAAC,CAAC,CAAC,KAAK;QAC5F,UAAU,EAAE,OAAO,OAAO,CAAC,UAAU,KAAK,SAAS,CAAC,CAAC,CAAC,OAAO,CAAC,UAAU,CAAC,CAAC,CAAC,KAAK;QAChF,oBAAoB,EAClB,OAAO,OAAO,CAAC,oBAAoB,KAAK,SAAS,CAAC,CAAC,CAAC,OAAO,CAAC,oBAAoB,CAAC,CAAC,CAAC,IAAI;KAC1F,CAAC,CAAC;IAEH,IAAI,OAAO,CAAC,OAAO,EAAE;QACnB,YAAY,CAAC,OAAO,GAAG,OAAO,CAAC,OAAO,CAAC;KACxC;IAED,OAAO,YAAY,CAAC;AACtB,CAAC;AAnBD,0DAmBC;AAED,SAAgB,SAAS,CAAC,gBAAiD;IACzE,IAAI,gBAAgB,IAAI,IAAI,EAAE;QAC5B,OAAO,KAAK,CAAC;KACd;IAED,IAAI,gBAAgB,CAAC,WAAW,IAAI,gBAAgB,CAAC,WAAW,CAAC,IAAI,KAAK,mBAAU,CAAC,MAAM,EAAE;QAC3F,OAAO,IAAI,CAAC;KACb;IAED,wFAAwF;IACxF,kDAAkD;IAClD,IAAI,gBAAgB,CAAC,WAAW,IAAI,gBAAgB,CAAC,WAAW,YAAY,0CAAmB,EAAE;QAC/F,MAAM,OAAO,GAAwB,KAAK,CAAC,IAAI,CAAC,gBAAgB,CAAC,WAAW,CAAC,OAAO,CAAC,MAAM,EAAE,CAAC,CAAC;QAC/F,OAAO,OAAO,CAAC,IAAI,CAAC,CAAC,MAAyB,EAAE,EAAE,CAAC,MAAM,CAAC,IAAI,KAAK,mBAAU,CAAC,MAAM,CAAC,CAAC;KACvF;IAED,OAAO,KAAK,CAAC;AACf,CAAC;AAjBD,8BAiBC"}

576
node_modules/mongodb/lib/collection.js generated vendored Normal file
View file

@@ -0,0 +1,576 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Collection = void 0;
const bson_1 = require("./bson");
const ordered_1 = require("./bulk/ordered");
const unordered_1 = require("./bulk/unordered");
const change_stream_1 = require("./change_stream");
const aggregation_cursor_1 = require("./cursor/aggregation_cursor");
const find_cursor_1 = require("./cursor/find_cursor");
const list_indexes_cursor_1 = require("./cursor/list_indexes_cursor");
const error_1 = require("./error");
const bulk_write_1 = require("./operations/bulk_write");
const count_1 = require("./operations/count");
const count_documents_1 = require("./operations/count_documents");
const delete_1 = require("./operations/delete");
const distinct_1 = require("./operations/distinct");
const drop_1 = require("./operations/drop");
const estimated_document_count_1 = require("./operations/estimated_document_count");
const execute_operation_1 = require("./operations/execute_operation");
const find_and_modify_1 = require("./operations/find_and_modify");
const indexes_1 = require("./operations/indexes");
const insert_1 = require("./operations/insert");
const is_capped_1 = require("./operations/is_capped");
const options_operation_1 = require("./operations/options_operation");
const rename_1 = require("./operations/rename");
const stats_1 = require("./operations/stats");
const update_1 = require("./operations/update");
const read_concern_1 = require("./read_concern");
const read_preference_1 = require("./read_preference");
const utils_1 = require("./utils");
const write_concern_1 = require("./write_concern");
/**
* The **Collection** class is an internal class that embodies a MongoDB collection
* allowing for insert/find/update/delete and other command operations on that MongoDB collection.
*
* **Collection cannot be instantiated directly.**
* @public
*
* @example
* ```ts
* import { MongoClient } from 'mongodb';
*
* interface Pet {
* name: string;
* kind: 'dog' | 'cat' | 'fish';
* }
*
* const client = new MongoClient('mongodb://localhost:27017');
* const pets = client.db().collection<Pet>('pets');
*
* const petCursor = pets.find();
*
* for await (const pet of petCursor) {
* console.log(`${pet.name} is a ${pet.kind}!`);
* }
* ```
*/
class Collection {
/**
* Create a new Collection instance
* @internal
*/
constructor(db, name, options) {
(0, utils_1.checkCollectionName)(name);
// Internal state
this.s = {
db,
options,
namespace: new utils_1.MongoDBNamespace(db.databaseName, name),
pkFactory: db.options?.pkFactory ?? utils_1.DEFAULT_PK_FACTORY,
readPreference: read_preference_1.ReadPreference.fromOptions(options),
bsonOptions: (0, bson_1.resolveBSONOptions)(options, db),
readConcern: read_concern_1.ReadConcern.fromOptions(options),
writeConcern: write_concern_1.WriteConcern.fromOptions(options)
};
}
/**
* The name of the database this collection belongs to
*/
get dbName() {
return this.s.namespace.db;
}
/**
* The name of this collection
*/
get collectionName() {
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
return this.s.namespace.collection;
}
/**
* The namespace of this collection, in the format `${this.dbName}.${this.collectionName}`
*/
get namespace() {
return this.s.namespace.toString();
}
/**
* The current readConcern of the collection. If not explicitly defined for
* this collection, will be inherited from the parent DB
*/
get readConcern() {
if (this.s.readConcern == null) {
return this.s.db.readConcern;
}
return this.s.readConcern;
}
/**
* The current readPreference of the collection. If not explicitly defined for
* this collection, will be inherited from the parent DB
*/
get readPreference() {
if (this.s.readPreference == null) {
return this.s.db.readPreference;
}
return this.s.readPreference;
}
get bsonOptions() {
return this.s.bsonOptions;
}
/**
* The current writeConcern of the collection. If not explicitly defined for
* this collection, will be inherited from the parent DB
*/
get writeConcern() {
if (this.s.writeConcern == null) {
return this.s.db.writeConcern;
}
return this.s.writeConcern;
}
/** The current index hint for the collection */
get hint() {
return this.s.collectionHint;
}
set hint(v) {
this.s.collectionHint = (0, utils_1.normalizeHintField)(v);
}
/**
* Inserts a single document into MongoDB. If documents passed in do not contain the **_id** field,
* one will be added to each of the documents missing it by the driver, mutating the document. This behavior
* can be overridden by setting the **forceServerObjectId** flag.
*
* @param doc - The document to insert
* @param options - Optional settings for the command
*/
async insertOne(doc, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new insert_1.InsertOneOperation(this, doc, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Inserts an array of documents into MongoDB. If documents passed in do not contain the **_id** field,
* one will be added to each of the documents missing it by the driver, mutating the document. This behavior
* can be overridden by setting the **forceServerObjectId** flag.
*
* @param docs - The documents to insert
* @param options - Optional settings for the command
*/
async insertMany(docs, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new insert_1.InsertManyOperation(this, docs, (0, utils_1.resolveOptions)(this, options ?? { ordered: true })));
}
/**
* Perform a bulkWrite operation without a fluent API
*
* Legal operation types are
* - `insertOne`
* - `replaceOne`
* - `updateOne`
* - `updateMany`
* - `deleteOne`
* - `deleteMany`
*
* If documents passed in do not contain the **_id** field,
* one will be added to each of the documents missing it by the driver, mutating the document. This behavior
* can be overridden by setting the **forceServerObjectId** flag.
*
* @param operations - Bulk operations to perform
* @param options - Optional settings for the command
* @throws MongoDriverError if operations is not an array
*/
async bulkWrite(operations, options) {
if (!Array.isArray(operations)) {
throw new error_1.MongoInvalidArgumentError('Argument "operations" must be an array of documents');
}
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new bulk_write_1.BulkWriteOperation(this, operations, (0, utils_1.resolveOptions)(this, options ?? { ordered: true })));
}
/**
* Update a single document in a collection
*
* @param filter - The filter used to select the document to update
* @param update - The update operations to be applied to the document
* @param options - Optional settings for the command
*/
async updateOne(filter, update, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new update_1.UpdateOneOperation(this, filter, update, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Replace a document in a collection with another document
*
* @param filter - The filter used to select the document to replace
* @param replacement - The Document that replaces the matching document
* @param options - Optional settings for the command
*/
async replaceOne(filter, replacement, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new update_1.ReplaceOneOperation(this, filter, replacement, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Update multiple documents in a collection
*
* @param filter - The filter used to select the documents to update
* @param update - The update operations to be applied to the documents
* @param options - Optional settings for the command
*/
async updateMany(filter, update, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new update_1.UpdateManyOperation(this, filter, update, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Delete a document from a collection
*
* @param filter - The filter used to select the document to remove
* @param options - Optional settings for the command
*/
async deleteOne(filter = {}, options = {}) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new delete_1.DeleteOneOperation(this, filter, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Delete multiple documents from a collection
*
* @param filter - The filter used to select the documents to remove
* @param options - Optional settings for the command
*/
async deleteMany(filter = {}, options = {}) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new delete_1.DeleteManyOperation(this, filter, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Rename the collection.
*
* @remarks
* This operation does not inherit options from the Db or MongoClient.
*
* @param newName - New name of the collection.
* @param options - Optional settings for the command
*/
async rename(newName, options) {
// Intentionally, we do not inherit options from parent for this operation.
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new rename_1.RenameOperation(this, newName, {
...options,
readPreference: read_preference_1.ReadPreference.PRIMARY
}));
}
/**
* Drop the collection from the database, removing it permanently. New accesses will create a new collection.
*
* @param options - Optional settings for the command
*/
async drop(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new drop_1.DropCollectionOperation(this.s.db, this.collectionName, options));
}
async findOne(filter = {}, options = {}) {
return this.find(filter, options).limit(-1).batchSize(1).next();
}
find(filter = {}, options = {}) {
return new find_cursor_1.FindCursor(this.s.db.s.client, this.s.namespace, filter, (0, utils_1.resolveOptions)(this, options));
}
/**
* Returns the options of the collection.
*
* @param options - Optional settings for the command
*/
async options(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new options_operation_1.OptionsOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Returns if the collection is a capped collection
*
* @param options - Optional settings for the command
*/
async isCapped(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new is_capped_1.IsCappedOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Creates an index on the db and collection collection.
*
* @param indexSpec - The field name or index specification to create an index for
* @param options - Optional settings for the command
*
* @example
* ```ts
* const collection = client.db('foo').collection('bar');
*
* await collection.createIndex({ a: 1, b: -1 });
*
* // Alternate syntax for { c: 1, d: -1 } that ensures order of indexes
* await collection.createIndex([ [c, 1], [d, -1] ]);
*
* // Equivalent to { e: 1 }
* await collection.createIndex('e');
*
* // Equivalent to { f: 1, g: 1 }
* await collection.createIndex(['f', 'g'])
*
* // Equivalent to { h: 1, i: -1 }
* await collection.createIndex([ { h: 1 }, { i: -1 } ]);
*
* // Equivalent to { j: 1, k: -1, l: 2d }
* await collection.createIndex(['j', ['k', -1], { l: '2d' }])
* ```
*/
async createIndex(indexSpec, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new indexes_1.CreateIndexOperation(this, this.collectionName, indexSpec, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Creates multiple indexes in the collection. This method is only supported for
* MongoDB 2.6 or higher; earlier versions of MongoDB will throw a command not supported
* error.
*
* **Note**: Unlike {@link Collection#createIndex| createIndex}, this function takes in raw index specifications.
* Index specifications are defined {@link http://docs.mongodb.org/manual/reference/command/createIndexes/| here}.
*
* @param indexSpecs - An array of index specifications to be created
* @param options - Optional settings for the command
*
* @example
* ```ts
* const collection = client.db('foo').collection('bar');
* await collection.createIndexes([
* // Simple index on field fizz
* {
* key: { fizz: 1 },
* },
* // wildcard index
* {
* key: { '$**': 1 }
* },
* // named index on darmok and jalad
* {
* key: { darmok: 1, jalad: -1 },
* name: 'tanagra'
* }
* ]);
* ```
*/
async createIndexes(indexSpecs, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new indexes_1.CreateIndexesOperation(this, this.collectionName, indexSpecs, (0, utils_1.resolveOptions)(this, { ...options, maxTimeMS: undefined })));
}
/**
* Drops an index from this collection.
*
* @param indexName - Name of the index to drop.
* @param options - Optional settings for the command
*/
async dropIndex(indexName, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new indexes_1.DropIndexOperation(this, indexName, {
...(0, utils_1.resolveOptions)(this, options),
readPreference: read_preference_1.ReadPreference.primary
}));
}
/**
* Drops all indexes from this collection.
*
* @param options - Optional settings for the command
*/
async dropIndexes(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new indexes_1.DropIndexesOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Get the list of all indexes information for the collection.
*
* @param options - Optional settings for the command
*/
listIndexes(options) {
return new list_indexes_cursor_1.ListIndexesCursor(this, (0, utils_1.resolveOptions)(this, options));
}
/**
* Checks if one or more indexes exist on the collection; fails on the first non-existing index
*
* @param indexes - One or more index names to check.
* @param options - Optional settings for the command
*/
async indexExists(indexes, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new indexes_1.IndexExistsOperation(this, indexes, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Retrieves this collection's index info.
*
* @param options - Optional settings for the command
*/
async indexInformation(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new indexes_1.IndexInformationOperation(this.s.db, this.collectionName, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Gets an estimate of the count of documents in a collection using collection metadata.
* This will always run a count command on all server versions.
*
* Due to an oversight in versions 5.0.0-5.0.8 of MongoDB, the count command,
* which estimatedDocumentCount uses in its implementation, was not included in v1 of
* the Stable API, and so users of the Stable API with estimatedDocumentCount are
* recommended to upgrade their server version to 5.0.9+ or set apiStrict: false to avoid
* encountering errors.
*
* @see {@link https://www.mongodb.com/docs/manual/reference/command/count/#behavior|Count: Behavior}
* @param options - Optional settings for the command
*/
async estimatedDocumentCount(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new estimated_document_count_1.EstimatedDocumentCountOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Gets the number of documents matching the filter.
* For a fast count of the total documents in a collection see {@link Collection#estimatedDocumentCount| estimatedDocumentCount}.
* **Note**: When migrating from {@link Collection#count| count} to {@link Collection#countDocuments| countDocuments}
* the following query operators must be replaced:
*
* | Operator | Replacement |
* | -------- | ----------- |
* | `$where` | [`$expr`][1] |
* | `$near` | [`$geoWithin`][2] with [`$center`][3] |
* | `$nearSphere` | [`$geoWithin`][2] with [`$centerSphere`][4] |
*
* [1]: https://docs.mongodb.com/manual/reference/operator/query/expr/
* [2]: https://docs.mongodb.com/manual/reference/operator/query/geoWithin/
* [3]: https://docs.mongodb.com/manual/reference/operator/query/center/#op._S_center
* [4]: https://docs.mongodb.com/manual/reference/operator/query/centerSphere/#op._S_centerSphere
*
* @param filter - The filter for the count
* @param options - Optional settings for the command
*
* @see https://docs.mongodb.com/manual/reference/operator/query/expr/
* @see https://docs.mongodb.com/manual/reference/operator/query/geoWithin/
* @see https://docs.mongodb.com/manual/reference/operator/query/center/#op._S_center
* @see https://docs.mongodb.com/manual/reference/operator/query/centerSphere/#op._S_centerSphere
*/
async countDocuments(filter = {}, options = {}) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new count_documents_1.CountDocumentsOperation(this, filter, (0, utils_1.resolveOptions)(this, options)));
}
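// Example (sketch), per the migration table above — replacing `$where` with `$expr`:
//   before: collection.count({ qty: { $gt: 10 }, $where: 'this.a > this.b' })
//   after:  collection.countDocuments({ qty: { $gt: 10 }, $expr: { $gt: ['$a', '$b'] } })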
async distinct(key, filter = {}, options = {}) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new distinct_1.DistinctOperation(this, key, filter, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Retrieve all the indexes on the collection.
*
* @param options - Optional settings for the command
*/
async indexes(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new indexes_1.IndexesOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Get all the collection statistics.
*
* @param options - Optional settings for the command
*/
async stats(options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new stats_1.CollStatsOperation(this, options));
}
/**
* Find a document and delete it in one atomic operation. Requires a write lock for the duration of the operation.
*
* @param filter - The filter used to select the document to remove
* @param options - Optional settings for the command
*/
async findOneAndDelete(filter, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new find_and_modify_1.FindOneAndDeleteOperation(this, filter, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Find a document and replace it in one atomic operation. Requires a write lock for the duration of the operation.
*
* @param filter - The filter used to select the document to replace
* @param replacement - The Document that replaces the matching document
* @param options - Optional settings for the command
*/
async findOneAndReplace(filter, replacement, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new find_and_modify_1.FindOneAndReplaceOperation(this, filter, replacement, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Find a document and update it in one atomic operation. Requires a write lock for the duration of the operation.
*
* @param filter - The filter used to select the document to update
* @param update - Update operations to be performed on the document
* @param options - Optional settings for the command
*/
async findOneAndUpdate(filter, update, options) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new find_and_modify_1.FindOneAndUpdateOperation(this, filter, update, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Execute an aggregation framework pipeline against the collection; needs MongoDB \>= 2.2
*
* @param pipeline - An array of aggregation pipeline stages to execute
* @param options - Optional settings for the command
*/
aggregate(pipeline = [], options) {
if (!Array.isArray(pipeline)) {
throw new error_1.MongoInvalidArgumentError('Argument "pipeline" must be an array of aggregation stages');
}
return new aggregation_cursor_1.AggregationCursor(this.s.db.s.client, this.s.namespace, pipeline, (0, utils_1.resolveOptions)(this, options));
}
/**
* Create a new Change Stream, watching for new changes (insertions, updates, replacements, deletions, and invalidations) in this collection.
*
* @remarks
* watch() accepts two generic arguments for distinct use cases:
* - The first is to override the schema that may be defined for this specific collection
* - The second is to override the shape of the change stream document entirely; if it is not provided, the type will default to ChangeStreamDocument of the first argument
* @example
* By just providing the first argument I can type the change to be `ChangeStreamDocument<{ _id: number }>`
* ```ts
* collection.watch<{ _id: number }>()
* .on('change', change => console.log(change._id.toFixed(4)));
* ```
*
* @example
* Passing a second argument provides a way to reflect the type changes caused by an advanced pipeline.
* Here, we are using a pipeline to have MongoDB filter for insert changes only and add a comment.
* No need to start from scratch on the ChangeStreamInsertDocument type!
* By using an intersection we can save time and ensure defaults remain the same type!
* ```ts
* collection
* .watch<Schema, ChangeStreamInsertDocument<Schema> & { comment: string }>([
* { $addFields: { comment: 'big changes' } },
* { $match: { operationType: 'insert' } }
* ])
* .on('change', change => {
* change.comment.startsWith('big');
* change.operationType === 'insert';
* // No need to narrow in code because the generics did that for us!
* expectType<Schema>(change.fullDocument);
* });
* ```
*
* @param pipeline - An array of {@link https://docs.mongodb.com/manual/reference/operator/aggregation-pipeline/|aggregation pipeline stages} through which to pass change stream documents. This allows for filtering (using $match) and manipulating the change stream documents.
* @param options - Optional settings for the command
* @typeParam TLocal - Type of the data being detected by the change stream
* @typeParam TChange - Type of the whole change stream document emitted
*/
watch(pipeline = [], options = {}) {
// Allow optionally not specifying a pipeline
if (!Array.isArray(pipeline)) {
options = pipeline;
pipeline = [];
}
return new change_stream_1.ChangeStream(this, pipeline, (0, utils_1.resolveOptions)(this, options));
}
/**
* Initiate an out-of-order batch write operation. All operations will be buffered into insert/update/remove commands executed out of order.
*
* @throws MongoNotConnectedError
* @remarks
* **NOTE:** MongoClient must be connected prior to calling this method due to a known limitation in this legacy implementation.
* However, `collection.bulkWrite()` provides an equivalent API that does not require prior connecting.
*/
initializeUnorderedBulkOp(options) {
return new unordered_1.UnorderedBulkOperation(this, (0, utils_1.resolveOptions)(this, options));
}
/**
* Initiate an in-order bulk write operation. Operations will be executed serially, in the order they are added, creating a new operation for each switch in types.
*
* @throws MongoNotConnectedError
* @remarks
* **NOTE:** MongoClient must be connected prior to calling this method due to a known limitation in this legacy implementation.
* However, `collection.bulkWrite()` provides an equivalent API that does not require prior connecting.
*/
initializeOrderedBulkOp(options) {
return new ordered_1.OrderedBulkOperation(this, (0, utils_1.resolveOptions)(this, options));
}
/**
* An estimated count of the documents in the db matching a filter.
*
* **NOTE:** This method has been deprecated, since it does not provide an accurate count of the documents
* in a collection. To obtain an accurate count of documents in the collection, use {@link Collection#countDocuments| countDocuments}.
* To obtain an estimated count of all documents in the collection, use {@link Collection#estimatedDocumentCount| estimatedDocumentCount}.
*
* @deprecated use {@link Collection#countDocuments| countDocuments} or {@link Collection#estimatedDocumentCount| estimatedDocumentCount} instead
*
* @param filter - The filter for the count.
* @param options - Optional settings for the command
*/
async count(filter = {}, options = {}) {
return (0, execute_operation_1.executeOperation)(this.s.db.s.client, new count_1.CountOperation(utils_1.MongoDBNamespace.fromString(this.namespace), filter, (0, utils_1.resolveOptions)(this, options)));
}
}
exports.Collection = Collection;
//# sourceMappingURL=collection.js.map
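
A small end-to-end sketch exercising the `Collection` surface defined above through the public API; the connection string, database name, and documents are placeholders.

```js
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  const pets = client.db('demo').collection('pets');

  await pets.insertMany([
    { name: 'Nemo', kind: 'fish' },
    { name: 'Rex', kind: 'dog' }
  ]);

  // countDocuments runs a real query; estimatedDocumentCount reads collection metadata.
  console.log(await pets.countDocuments({ kind: 'dog' })); // 1
  console.log(await pets.estimatedDocumentCount());        // ~2

  await client.close();
}

main().catch(console.error);
```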

1
node_modules/mongodb/lib/collection.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

1106
node_modules/mongodb/lib/connection_string.js generated vendored Normal file

File diff suppressed because it is too large

1
node_modules/mongodb/lib/connection_string.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

131
node_modules/mongodb/lib/constants.js generated vendored Normal file
View file

@@ -0,0 +1,131 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.TOPOLOGY_EVENTS = exports.CMAP_EVENTS = exports.HEARTBEAT_EVENTS = exports.RESUME_TOKEN_CHANGED = exports.END = exports.CHANGE = exports.INIT = exports.MORE = exports.RESPONSE = exports.SERVER_HEARTBEAT_FAILED = exports.SERVER_HEARTBEAT_SUCCEEDED = exports.SERVER_HEARTBEAT_STARTED = exports.COMMAND_FAILED = exports.COMMAND_SUCCEEDED = exports.COMMAND_STARTED = exports.CLUSTER_TIME_RECEIVED = exports.CONNECTION_CHECKED_IN = exports.CONNECTION_CHECKED_OUT = exports.CONNECTION_CHECK_OUT_FAILED = exports.CONNECTION_CHECK_OUT_STARTED = exports.CONNECTION_CLOSED = exports.CONNECTION_READY = exports.CONNECTION_CREATED = exports.CONNECTION_POOL_READY = exports.CONNECTION_POOL_CLEARED = exports.CONNECTION_POOL_CLOSED = exports.CONNECTION_POOL_CREATED = exports.TOPOLOGY_DESCRIPTION_CHANGED = exports.TOPOLOGY_CLOSED = exports.TOPOLOGY_OPENING = exports.SERVER_DESCRIPTION_CHANGED = exports.SERVER_CLOSED = exports.SERVER_OPENING = exports.DESCRIPTION_RECEIVED = exports.UNPINNED = exports.PINNED = exports.MESSAGE = exports.ENDED = exports.CLOSED = exports.CONNECT = exports.OPEN = exports.CLOSE = exports.TIMEOUT = exports.ERROR = exports.SYSTEM_JS_COLLECTION = exports.SYSTEM_COMMAND_COLLECTION = exports.SYSTEM_USER_COLLECTION = exports.SYSTEM_PROFILE_COLLECTION = exports.SYSTEM_INDEX_COLLECTION = exports.SYSTEM_NAMESPACE_COLLECTION = void 0;
exports.LEGACY_HELLO_COMMAND_CAMEL_CASE = exports.LEGACY_HELLO_COMMAND = exports.MONGO_CLIENT_EVENTS = exports.LOCAL_SERVER_EVENTS = exports.SERVER_RELAY_EVENTS = exports.APM_EVENTS = void 0;
exports.SYSTEM_NAMESPACE_COLLECTION = 'system.namespaces';
exports.SYSTEM_INDEX_COLLECTION = 'system.indexes';
exports.SYSTEM_PROFILE_COLLECTION = 'system.profile';
exports.SYSTEM_USER_COLLECTION = 'system.users';
exports.SYSTEM_COMMAND_COLLECTION = '$cmd';
exports.SYSTEM_JS_COLLECTION = 'system.js';
// events
exports.ERROR = 'error';
exports.TIMEOUT = 'timeout';
exports.CLOSE = 'close';
exports.OPEN = 'open';
exports.CONNECT = 'connect';
exports.CLOSED = 'closed';
exports.ENDED = 'ended';
exports.MESSAGE = 'message';
exports.PINNED = 'pinned';
exports.UNPINNED = 'unpinned';
exports.DESCRIPTION_RECEIVED = 'descriptionReceived';
exports.SERVER_OPENING = 'serverOpening';
exports.SERVER_CLOSED = 'serverClosed';
exports.SERVER_DESCRIPTION_CHANGED = 'serverDescriptionChanged';
exports.TOPOLOGY_OPENING = 'topologyOpening';
exports.TOPOLOGY_CLOSED = 'topologyClosed';
exports.TOPOLOGY_DESCRIPTION_CHANGED = 'topologyDescriptionChanged';
exports.CONNECTION_POOL_CREATED = 'connectionPoolCreated';
exports.CONNECTION_POOL_CLOSED = 'connectionPoolClosed';
exports.CONNECTION_POOL_CLEARED = 'connectionPoolCleared';
exports.CONNECTION_POOL_READY = 'connectionPoolReady';
exports.CONNECTION_CREATED = 'connectionCreated';
exports.CONNECTION_READY = 'connectionReady';
exports.CONNECTION_CLOSED = 'connectionClosed';
exports.CONNECTION_CHECK_OUT_STARTED = 'connectionCheckOutStarted';
exports.CONNECTION_CHECK_OUT_FAILED = 'connectionCheckOutFailed';
exports.CONNECTION_CHECKED_OUT = 'connectionCheckedOut';
exports.CONNECTION_CHECKED_IN = 'connectionCheckedIn';
exports.CLUSTER_TIME_RECEIVED = 'clusterTimeReceived';
exports.COMMAND_STARTED = 'commandStarted';
exports.COMMAND_SUCCEEDED = 'commandSucceeded';
exports.COMMAND_FAILED = 'commandFailed';
exports.SERVER_HEARTBEAT_STARTED = 'serverHeartbeatStarted';
exports.SERVER_HEARTBEAT_SUCCEEDED = 'serverHeartbeatSucceeded';
exports.SERVER_HEARTBEAT_FAILED = 'serverHeartbeatFailed';
exports.RESPONSE = 'response';
exports.MORE = 'more';
exports.INIT = 'init';
exports.CHANGE = 'change';
exports.END = 'end';
exports.RESUME_TOKEN_CHANGED = 'resumeTokenChanged';
/** @public */
exports.HEARTBEAT_EVENTS = Object.freeze([
exports.SERVER_HEARTBEAT_STARTED,
exports.SERVER_HEARTBEAT_SUCCEEDED,
exports.SERVER_HEARTBEAT_FAILED
]);
/** @public */
exports.CMAP_EVENTS = Object.freeze([
exports.CONNECTION_POOL_CREATED,
exports.CONNECTION_POOL_READY,
exports.CONNECTION_POOL_CLEARED,
exports.CONNECTION_POOL_CLOSED,
exports.CONNECTION_CREATED,
exports.CONNECTION_READY,
exports.CONNECTION_CLOSED,
exports.CONNECTION_CHECK_OUT_STARTED,
exports.CONNECTION_CHECK_OUT_FAILED,
exports.CONNECTION_CHECKED_OUT,
exports.CONNECTION_CHECKED_IN
]);
/** @public */
exports.TOPOLOGY_EVENTS = Object.freeze([
exports.SERVER_OPENING,
exports.SERVER_CLOSED,
exports.SERVER_DESCRIPTION_CHANGED,
exports.TOPOLOGY_OPENING,
exports.TOPOLOGY_CLOSED,
exports.TOPOLOGY_DESCRIPTION_CHANGED,
exports.ERROR,
exports.TIMEOUT,
exports.CLOSE
]);
/** @public */
exports.APM_EVENTS = Object.freeze([
exports.COMMAND_STARTED,
exports.COMMAND_SUCCEEDED,
exports.COMMAND_FAILED
]);
/**
* All events that we relay to the `Topology`
* @internal
*/
exports.SERVER_RELAY_EVENTS = Object.freeze([
exports.SERVER_HEARTBEAT_STARTED,
exports.SERVER_HEARTBEAT_SUCCEEDED,
exports.SERVER_HEARTBEAT_FAILED,
exports.COMMAND_STARTED,
exports.COMMAND_SUCCEEDED,
exports.COMMAND_FAILED,
...exports.CMAP_EVENTS
]);
/**
* All events we listen to from `Server` instances, but do not forward to the client
* @internal
*/
exports.LOCAL_SERVER_EVENTS = Object.freeze([
exports.CONNECT,
exports.DESCRIPTION_RECEIVED,
exports.CLOSED,
exports.ENDED
]);
/** @public */
exports.MONGO_CLIENT_EVENTS = Object.freeze([
...exports.CMAP_EVENTS,
...exports.APM_EVENTS,
...exports.TOPOLOGY_EVENTS,
...exports.HEARTBEAT_EVENTS
]);
/**
* @internal
* The legacy hello command that was deprecated in MongoDB 5.0.
*/
exports.LEGACY_HELLO_COMMAND = 'ismaster';
/**
* @internal
* The legacy hello command that was deprecated in MongoDB 5.0.
*/
exports.LEGACY_HELLO_COMMAND_CAMEL_CASE = 'isMaster';
//# sourceMappingURL=constants.js.map
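
A sketch showing what the frozen event-name lists above are good for: attaching a blanket listener. The deep `require` path into the driver's `lib/` output is illustrative; command-monitoring (APM) events additionally require `monitorCommands: true`.

```js
const { MongoClient } = require('mongodb');
const { CMAP_EVENTS, APM_EVENTS } = require('mongodb/lib/constants');

const client = new MongoClient('mongodb://localhost:27017', { monitorCommands: true });

// Log every connection-pool and command-monitoring event by name.
for (const eventName of [...CMAP_EVENTS, ...APM_EVENTS]) {
  client.on(eventName, event => console.log(`[${eventName}]`, event.constructor.name));
}
```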

1
node_modules/mongodb/lib/constants.js.map generated vendored Normal file
View file

@@ -0,0 +1 @@
{"version":3,"file":"constants.js","sourceRoot":"","sources":["../src/constants.ts"],"names":[],"mappings":";;;;AAAa,QAAA,2BAA2B,GAAG,mBAAmB,CAAC;AAClD,QAAA,uBAAuB,GAAG,gBAAgB,CAAC;AAC3C,QAAA,yBAAyB,GAAG,gBAAgB,CAAC;AAC7C,QAAA,sBAAsB,GAAG,cAAc,CAAC;AACxC,QAAA,yBAAyB,GAAG,MAAM,CAAC;AACnC,QAAA,oBAAoB,GAAG,WAAW,CAAC;AAEhD,SAAS;AACI,QAAA,KAAK,GAAG,OAAgB,CAAC;AACzB,QAAA,OAAO,GAAG,SAAkB,CAAC;AAC7B,QAAA,KAAK,GAAG,OAAgB,CAAC;AACzB,QAAA,IAAI,GAAG,MAAe,CAAC;AACvB,QAAA,OAAO,GAAG,SAAkB,CAAC;AAC7B,QAAA,MAAM,GAAG,QAAiB,CAAC;AAC3B,QAAA,KAAK,GAAG,OAAgB,CAAC;AACzB,QAAA,OAAO,GAAG,SAAkB,CAAC;AAC7B,QAAA,MAAM,GAAG,QAAiB,CAAC;AAC3B,QAAA,QAAQ,GAAG,UAAmB,CAAC;AAC/B,QAAA,oBAAoB,GAAG,qBAAqB,CAAC;AAC7C,QAAA,cAAc,GAAG,eAAwB,CAAC;AAC1C,QAAA,aAAa,GAAG,cAAuB,CAAC;AACxC,QAAA,0BAA0B,GAAG,0BAAmC,CAAC;AACjE,QAAA,gBAAgB,GAAG,iBAA0B,CAAC;AAC9C,QAAA,eAAe,GAAG,gBAAyB,CAAC;AAC5C,QAAA,4BAA4B,GAAG,4BAAqC,CAAC;AACrE,QAAA,uBAAuB,GAAG,uBAAgC,CAAC;AAC3D,QAAA,sBAAsB,GAAG,sBAA+B,CAAC;AACzD,QAAA,uBAAuB,GAAG,uBAAgC,CAAC;AAC3D,QAAA,qBAAqB,GAAG,qBAA8B,CAAC;AACvD,QAAA,kBAAkB,GAAG,mBAA4B,CAAC;AAClD,QAAA,gBAAgB,GAAG,iBAA0B,CAAC;AAC9C,QAAA,iBAAiB,GAAG,kBAA2B,CAAC;AAChD,QAAA,4BAA4B,GAAG,2BAAoC,CAAC;AACpE,QAAA,2BAA2B,GAAG,0BAAmC,CAAC;AAClE,QAAA,sBAAsB,GAAG,sBAA+B,CAAC;AACzD,QAAA,qBAAqB,GAAG,qBAA8B,CAAC;AACvD,QAAA,qBAAqB,GAAG,qBAA8B,CAAC;AACvD,QAAA,eAAe,GAAG,gBAAyB,CAAC;AAC5C,QAAA,iBAAiB,GAAG,kBAA2B,CAAC;AAChD,QAAA,cAAc,GAAG,eAAwB,CAAC;AAC1C,QAAA,wBAAwB,GAAG,wBAAiC,CAAC;AAC7D,QAAA,0BAA0B,GAAG,0BAAmC,CAAC;AACjE,QAAA,uBAAuB,GAAG,uBAAgC,CAAC;AAC3D,QAAA,QAAQ,GAAG,UAAmB,CAAC;AAC/B,QAAA,IAAI,GAAG,MAAe,CAAC;AACvB,QAAA,IAAI,GAAG,MAAe,CAAC;AACvB,QAAA,MAAM,GAAG,QAAiB,CAAC;AAC3B,QAAA,GAAG,GAAG,KAAc,CAAC;AACrB,QAAA,oBAAoB,GAAG,oBAA6B,CAAC;AAElE,cAAc;AACD,QAAA,gBAAgB,GAAG,MAAM,CAAC,MAAM,CAAC;IAC5C,gCAAwB;IACxB,kCAA0B;IAC1B,+BAAuB;CACf,CAAC,CAAC;AAEZ,cAAc;AACD,QAAA,WAAW,GAAG,MAAM,CAAC,MAAM,CAAC;IACvC,+BAAuB;IACvB,6BAAqB;IACrB,+BAAuB;IACvB,8BAAsB;IACtB,0BAAkB;IAClB,wBAAgB;IAChB,yBAAiB;IACjB,oCAA4B;IAC5B,mCAA2B;IAC3B,8BAAsB;IACtB,6BAAqB;CACb,CAAC,CAAC;AAEZ,cAAc;AACD,QAAA,eAAe,GAAG,MAAM,CAAC,MAAM,CAAC;IAC3C,sBAAc;IACd,qBAAa;IACb,kCAA0B;IAC1B,wBAAgB;IAChB,uBAAe;IACf,oCAA4B;IAC5B,aAAK;IACL,eAAO;IACP,aAAK;CACG,CAAC,CAAC;AAEZ,cAAc;AACD,QAAA,UAAU,GAAG,MAAM,CAAC,MAAM,CAAC;IACtC,uBAAe;IACf,yBAAiB;IACjB,sBAAc;CACN,CAAC,CAAC;AAEZ;;;GAGG;AACU,QAAA,mBAAmB,GAAG,MAAM,CAAC,MAAM,CAAC;IAC/C,gCAAwB;IACxB,kCAA0B;IAC1B,+BAAuB;IACvB,uBAAe;IACf,yBAAiB;IACjB,sBAAc;IACd,GAAG,mBAAW;CACN,CAAC,CAAC;AAEZ;;;GAGG;AACU,QAAA,mBAAmB,GAAG,MAAM,CAAC,MAAM,CAAC;IAC/C,eAAO;IACP,4BAAoB;IACpB,cAAM;IACN,aAAK;CACG,CAAC,CAAC;AAEZ,cAAc;AACD,QAAA,mBAAmB,GAAG,MAAM,CAAC,MAAM,CAAC;IAC/C,GAAG,mBAAW;IACd,GAAG,kBAAU;IACb,GAAG,uBAAe;IAClB,GAAG,wBAAgB;CACX,CAAC,CAAC;AAEZ;;;GAGG;AACU,QAAA,oBAAoB,GAAG,UAAU,CAAC;AAE/C;;;GAGG;AACU,QAAA,+BAA+B,GAAG,UAAU,CAAC"}

678
node_modules/mongodb/lib/cursor/abstract_cursor.js generated vendored Normal file
View file

@@ -0,0 +1,678 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.assertUninitialized = exports.next = exports.AbstractCursor = exports.CURSOR_FLAGS = void 0;
const stream_1 = require("stream");
const util_1 = require("util");
const bson_1 = require("../bson");
const error_1 = require("../error");
const mongo_types_1 = require("../mongo_types");
const execute_operation_1 = require("../operations/execute_operation");
const get_more_1 = require("../operations/get_more");
const kill_cursors_1 = require("../operations/kill_cursors");
const read_concern_1 = require("../read_concern");
const read_preference_1 = require("../read_preference");
const sessions_1 = require("../sessions");
const utils_1 = require("../utils");
/** @internal */
const kId = Symbol('id');
/** @internal */
const kDocuments = Symbol('documents');
/** @internal */
const kServer = Symbol('server');
/** @internal */
const kNamespace = Symbol('namespace');
/** @internal */
const kClient = Symbol('client');
/** @internal */
const kSession = Symbol('session');
/** @internal */
const kOptions = Symbol('options');
/** @internal */
const kTransform = Symbol('transform');
/** @internal */
const kInitialized = Symbol('initialized');
/** @internal */
const kClosed = Symbol('closed');
/** @internal */
const kKilled = Symbol('killed');
/** @internal */
const kInit = Symbol('kInit');
/** @public */
exports.CURSOR_FLAGS = [
'tailable',
'oplogReplay',
'noCursorTimeout',
'awaitData',
'exhaust',
'partial'
];
/** @public */
class AbstractCursor extends mongo_types_1.TypedEventEmitter {
/** @internal */
constructor(client, namespace, options = {}) {
super();
if (!client.s.isMongoClient) {
throw new error_1.MongoRuntimeError('Cursor must be constructed with MongoClient');
}
this[kClient] = client;
this[kNamespace] = namespace;
this[kId] = null;
this[kDocuments] = new utils_1.List();
this[kInitialized] = false;
this[kClosed] = false;
this[kKilled] = false;
this[kOptions] = {
readPreference: options.readPreference && options.readPreference instanceof read_preference_1.ReadPreference
? options.readPreference
: read_preference_1.ReadPreference.primary,
...(0, bson_1.pluckBSONSerializeOptions)(options)
};
const readConcern = read_concern_1.ReadConcern.fromOptions(options);
if (readConcern) {
this[kOptions].readConcern = readConcern;
}
if (typeof options.batchSize === 'number') {
this[kOptions].batchSize = options.batchSize;
}
// we check for undefined specifically here to allow falsy values
// eslint-disable-next-line no-restricted-syntax
if (options.comment !== undefined) {
this[kOptions].comment = options.comment;
}
if (typeof options.maxTimeMS === 'number') {
this[kOptions].maxTimeMS = options.maxTimeMS;
}
if (typeof options.maxAwaitTimeMS === 'number') {
this[kOptions].maxAwaitTimeMS = options.maxAwaitTimeMS;
}
if (options.session instanceof sessions_1.ClientSession) {
this[kSession] = options.session;
}
else {
this[kSession] = this[kClient].startSession({ owner: this, explicit: false });
}
}
get id() {
return this[kId] ?? undefined;
}
/** @internal */
get client() {
return this[kClient];
}
/** @internal */
get server() {
return this[kServer];
}
get namespace() {
return this[kNamespace];
}
get readPreference() {
return this[kOptions].readPreference;
}
get readConcern() {
return this[kOptions].readConcern;
}
/** @internal */
get session() {
return this[kSession];
}
set session(clientSession) {
this[kSession] = clientSession;
}
/** @internal */
get cursorOptions() {
return this[kOptions];
}
get closed() {
return this[kClosed];
}
get killed() {
return this[kKilled];
}
get loadBalanced() {
return !!this[kClient].topology?.loadBalanced;
}
/** Returns current buffered documents length */
bufferedCount() {
return this[kDocuments].length;
}
/** Returns current buffered documents */
readBufferedDocuments(number) {
const bufferedDocs = [];
const documentsToRead = Math.min(number ?? this[kDocuments].length, this[kDocuments].length);
for (let count = 0; count < documentsToRead; count++) {
const document = this[kDocuments].shift();
if (document != null) {
bufferedDocs.push(document);
}
}
return bufferedDocs;
}
async *[Symbol.asyncIterator]() {
if (this.closed) {
return;
}
try {
while (true) {
const document = await this.next();
// Intentional strict null check, because users can map cursors to falsey values.
// We allow mapping to all values except for null.
// eslint-disable-next-line no-restricted-syntax
if (document === null) {
if (!this.closed) {
const message = 'Cursor returned a `null` document, but the cursor is not exhausted. Mapping documents to `null` is not supported in the cursor transform.';
await cleanupCursorAsync(this, { needsToEmitClosed: true }).catch(() => null);
throw new error_1.MongoAPIError(message);
}
break;
}
yield document;
if (this[kId] === bson_1.Long.ZERO) {
// Cursor exhausted
break;
}
}
}
finally {
// Only close the cursor if it has not already been closed. This finally clause handles
// the case when a user would break out of a for await of loop early.
if (!this.closed) {
await this.close().catch(() => null);
}
}
}
stream(options) {
if (options?.transform) {
const transform = options.transform;
const readable = new ReadableCursorStream(this);
return readable.pipe(new stream_1.Transform({
objectMode: true,
highWaterMark: 1,
transform(chunk, _, callback) {
try {
const transformed = transform(chunk);
callback(undefined, transformed);
}
catch (err) {
callback(err);
}
}
}));
}
return new ReadableCursorStream(this);
}
async hasNext() {
if (this[kId] === bson_1.Long.ZERO) {
return false;
}
if (this[kDocuments].length !== 0) {
return true;
}
const doc = await nextAsync(this, true);
if (doc) {
this[kDocuments].unshift(doc);
return true;
}
return false;
}
/** Get the next available document from the cursor, returns null if no more documents are available. */
async next() {
if (this[kId] === bson_1.Long.ZERO) {
throw new error_1.MongoCursorExhaustedError();
}
return nextAsync(this, true);
}
/**
* Try to get the next available document from the cursor or `null` if an empty batch is returned
*/
async tryNext() {
if (this[kId] === bson_1.Long.ZERO) {
throw new error_1.MongoCursorExhaustedError();
}
return nextAsync(this, false);
}
/**
* Iterates over all the documents for this cursor using the iterator, callback pattern.
*
* If the iterator returns `false`, iteration will stop.
*
* @param iterator - The iteration callback.
*/
async forEach(iterator) {
if (typeof iterator !== 'function') {
throw new error_1.MongoInvalidArgumentError('Argument "iterator" must be a function');
}
for await (const document of this) {
const result = iterator(document);
if (result === false) {
break;
}
}
}
async close() {
const needsToEmitClosed = !this[kClosed];
this[kClosed] = true;
await cleanupCursorAsync(this, { needsToEmitClosed });
}
/**
* Returns an array of documents. The caller is responsible for making sure that there
* is enough memory to store the results. Note that the array only contains partial
* results when this cursor has been previously accessed. In that case,
* cursor.rewind() can be used to reset the cursor.
*/
async toArray() {
const array = [];
for await (const document of this) {
array.push(document);
}
return array;
}
/**
* Add a cursor flag to the cursor
*
* @param flag - The flag to set; must be one of 'tailable', 'oplogReplay', 'noCursorTimeout', 'awaitData', 'exhaust', or 'partial'.
* @param value - The flag boolean value.
*/
addCursorFlag(flag, value) {
assertUninitialized(this);
if (!exports.CURSOR_FLAGS.includes(flag)) {
throw new error_1.MongoInvalidArgumentError(`Flag ${flag} is not one of ${exports.CURSOR_FLAGS}`);
}
if (typeof value !== 'boolean') {
throw new error_1.MongoInvalidArgumentError(`Flag ${flag} must be a boolean value`);
}
this[kOptions][flag] = value;
return this;
}
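// Example (sketch): flags are validated against CURSOR_FLAGS before being set.
//   collection.find({}).addCursorFlag('noCursorTimeout', true); // ok
//   collection.find({}).addCursorFlag('bogus', true);           // throws MongoInvalidArgumentError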
/**
* Map all documents using the provided function
* If there is a transform set on the cursor, that will be called first and the result passed to
* this function's transform.
*
* @remarks
*
* **Note** Cursors use `null` internally to indicate that there are no more documents in the cursor. Providing a mapping
* function that maps values to `null` will result in the cursor closing itself before it has finished iterating
* all documents. This will **not** result in a memory leak, just surprising behavior. For example:
*
* ```typescript
* const cursor = collection.find({});
* cursor.map(() => null);
*
* const documents = await cursor.toArray();
* // documents is always [], regardless of how many documents are in the collection.
* ```
*
* Other falsey values are allowed:
*
* ```typescript
* const cursor = collection.find({});
* cursor.map(() => '');
*
* const documents = await cursor.toArray();
* // documents is now an array of empty strings
* ```
*
* **Note for Typescript Users:** adding a transform changes the return type of the iteration of this cursor,
* it **does not** return a new instance of a cursor. This means when calling map,
* you should always assign the result to a new variable in order to get a correctly typed cursor variable.
* Take note of the following example:
*
* @example
* ```typescript
* const cursor: FindCursor<Document> = coll.find();
* const mappedCursor: FindCursor<number> = cursor.map(doc => Object.keys(doc).length);
* const keyCounts: number[] = await mappedCursor.toArray(); // cursor.toArray() still returns Document[]
* ```
* @param transform - The mapping transformation method.
*/
map(transform) {
assertUninitialized(this);
const oldTransform = this[kTransform]; // TODO(NODE-3283): Improve transform typing
if (oldTransform) {
this[kTransform] = doc => {
return transform(oldTransform(doc));
};
}
else {
this[kTransform] = transform;
}
return this;
}
/**
* Set the ReadPreference for the cursor.
*
* @param readPreference - The new read preference for the cursor.
*/
withReadPreference(readPreference) {
assertUninitialized(this);
if (readPreference instanceof read_preference_1.ReadPreference) {
this[kOptions].readPreference = readPreference;
}
else if (typeof readPreference === 'string') {
this[kOptions].readPreference = read_preference_1.ReadPreference.fromString(readPreference);
}
else {
throw new error_1.MongoInvalidArgumentError(`Invalid read preference: ${readPreference}`);
}
return this;
}
/**
* Set the ReadConcern for the cursor.
*
* @param readConcern - The new read concern for the cursor.
*/
withReadConcern(readConcern) {
assertUninitialized(this);
const resolvedReadConcern = read_concern_1.ReadConcern.fromOptions({ readConcern });
if (resolvedReadConcern) {
this[kOptions].readConcern = resolvedReadConcern;
}
return this;
}
/**
* Set a maxTimeMS on the cursor query, allowing for hard timeout limits on queries (only supported on MongoDB 2.6 or higher)
*
* @param value - Number of milliseconds to wait before aborting the query.
*/
maxTimeMS(value) {
assertUninitialized(this);
if (typeof value !== 'number') {
throw new error_1.MongoInvalidArgumentError('Argument for maxTimeMS must be a number');
}
this[kOptions].maxTimeMS = value;
return this;
}
/**
* Set the batch size for the cursor.
*
* @param value - The number of documents to return per batch. See {@link https://docs.mongodb.com/manual/reference/command/find/|find command documentation}.
*/
batchSize(value) {
assertUninitialized(this);
if (this[kOptions].tailable) {
throw new error_1.MongoTailableCursorError('Tailable cursor does not support batchSize');
}
if (typeof value !== 'number') {
throw new error_1.MongoInvalidArgumentError('Operation "batchSize" requires an integer');
}
this[kOptions].batchSize = value;
return this;
}
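/**
 * For example, a sketch of trading round trips for memory (assumes a connected
 * `collection`; 100 is an illustrative batch size):
 *
 * ```typescript
 * // Each getMore now returns at most 100 documents instead of the server default.
 * for await (const doc of collection.find({}).batchSize(100)) {
 *   console.log(doc);
 * }
 * ```
 */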
/**
* Rewind this cursor to its uninitialized state. Any options that are present on the cursor will
* remain in effect. Iterating this cursor will cause new queries to be sent to the server, even
* if the resultant data has already been retrieved by this cursor.
*/
rewind() {
if (!this[kInitialized]) {
return;
}
this[kId] = null;
this[kDocuments].clear();
this[kClosed] = false;
this[kKilled] = false;
this[kInitialized] = false;
const session = this[kSession];
if (session) {
// We only want to end this session if we created it, and it hasn't ended yet
if (session.explicit === false) {
if (!session.hasEnded) {
session.endSession().catch(() => null);
}
this[kSession] = this.client.startSession({ owner: this, explicit: false });
}
}
}
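/**
 * For example, a sketch of re-running the same query through one cursor
 * (assumes a connected `collection`):
 *
 * ```typescript
 * const cursor = collection.find({}).limit(10);
 * const firstPass = await cursor.toArray();
 * cursor.rewind(); // back to the uninitialized state, options kept
 * const secondPass = await cursor.toArray(); // sends a fresh query to the server
 * ```
 */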
/** @internal */
_getMore(batchSize, callback) {
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
const getMoreOperation = new get_more_1.GetMoreOperation(this[kNamespace], this[kId], this[kServer], {
...this[kOptions],
session: this[kSession],
batchSize
});
(0, execute_operation_1.executeOperation)(this[kClient], getMoreOperation, callback);
}
/**
* @internal
*
* This function is exposed for the unified test runner's createChangeStream
* operation. We cannot refactor to use the abstract _initialize method without
* a significant refactor.
*/
[kInit](callback) {
this._initialize(this[kSession], (error, state) => {
if (state) {
const response = state.response;
this[kServer] = state.server;
if (response.cursor) {
// TODO(NODE-2674): Preserve int64 sent from MongoDB
this[kId] =
typeof response.cursor.id === 'number'
? bson_1.Long.fromNumber(response.cursor.id)
: typeof response.cursor.id === 'bigint'
? bson_1.Long.fromBigInt(response.cursor.id)
: response.cursor.id;
if (response.cursor.ns) {
this[kNamespace] = (0, utils_1.ns)(response.cursor.ns);
}
this[kDocuments].pushMany(response.cursor.firstBatch);
}
// When server responses return without a cursor document, we close this cursor
// and return the raw server response. This is often the case for explain
// commands, for example.
if (this[kId] == null) {
this[kId] = bson_1.Long.ZERO;
// TODO(NODE-3286): ExecutionResult needs to accept a generic parameter
this[kDocuments].push(state.response);
}
}
// the cursor is now initialized, even if an error occurred or it is dead
this[kInitialized] = true;
if (error) {
return cleanupCursor(this, { error }, () => callback(error, undefined));
}
if (cursorIsDead(this)) {
return cleanupCursor(this, undefined, () => callback());
}
callback();
});
}
}
exports.AbstractCursor = AbstractCursor;
/** @event */
AbstractCursor.CLOSE = 'close';
function nextDocument(cursor) {
const doc = cursor[kDocuments].shift();
if (doc && cursor[kTransform]) {
return cursor[kTransform](doc);
}
return doc;
}
const nextAsync = (0, util_1.promisify)(next);
/**
* @param cursor - the cursor on which to call `next`
* @param blocking - a boolean indicating whether or not the cursor should `block` until data
* is available. Generally, this flag is set to `false` because if the getMore returns no documents,
* the cursor has been exhausted. In certain scenarios (ChangeStreams, tailable await cursors and
* `tryNext`, for example) blocking is necessary because a getMore returning no documents does
* not indicate the end of the cursor.
* @param callback - callback to return the result to the caller
* @returns
*/
function next(cursor, blocking, callback) {
const cursorId = cursor[kId];
if (cursor.closed) {
return callback(undefined, null);
}
if (cursor[kDocuments].length !== 0) {
callback(undefined, nextDocument(cursor));
return;
}
if (cursorId == null) {
// All cursors must operate within a session, one must be made implicitly if not explicitly provided
cursor[kInit](err => {
if (err)
return callback(err);
return next(cursor, blocking, callback);
});
return;
}
if (cursorIsDead(cursor)) {
return cleanupCursor(cursor, undefined, () => callback(undefined, null));
}
// otherwise need to call getMore
const batchSize = cursor[kOptions].batchSize || 1000;
cursor._getMore(batchSize, (error, response) => {
if (response) {
const cursorId = typeof response.cursor.id === 'number'
? bson_1.Long.fromNumber(response.cursor.id)
: typeof response.cursor.id === 'bigint'
? bson_1.Long.fromBigInt(response.cursor.id)
: response.cursor.id;
cursor[kDocuments].pushMany(response.cursor.nextBatch);
cursor[kId] = cursorId;
}
if (error || cursorIsDead(cursor)) {
return cleanupCursor(cursor, { error }, () => callback(error, nextDocument(cursor)));
}
if (cursor[kDocuments].length === 0 && blocking === false) {
return callback(undefined, null);
}
next(cursor, blocking, callback);
});
}
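/**
 * A sketch of the distinction the `blocking` flag encodes, in terms of the
 * public cursor API (assumes a tailable awaitData cursor on a capped
 * collection):
 *
 * ```typescript
 * const doc = await cursor.next();      // blocks until a document arrives or the cursor dies
 * const maybe = await cursor.tryNext(); // resolves null as soon as a getMore comes back empty
 * ```
 */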
exports.next = next;
function cursorIsDead(cursor) {
const cursorId = cursor[kId];
return !!cursorId && cursorId.isZero();
}
const cleanupCursorAsync = (0, util_1.promisify)(cleanupCursor);
function cleanupCursor(cursor, options, callback) {
const cursorId = cursor[kId];
const cursorNs = cursor[kNamespace];
const server = cursor[kServer];
const session = cursor[kSession];
const error = options?.error;
const needsToEmitClosed = options?.needsToEmitClosed ?? cursor[kDocuments].length === 0;
if (error) {
if (cursor.loadBalanced && error instanceof error_1.MongoNetworkError) {
return completeCleanup();
}
}
if (cursorId == null || server == null || cursorId.isZero() || cursorNs == null) {
if (needsToEmitClosed) {
cursor[kClosed] = true;
cursor[kId] = bson_1.Long.ZERO;
cursor.emit(AbstractCursor.CLOSE);
}
if (session) {
if (session.owner === cursor) {
session.endSession({ error }).finally(() => {
callback();
});
return;
}
if (!session.inTransaction()) {
(0, sessions_1.maybeClearPinnedConnection)(session, { error });
}
}
return callback();
}
function completeCleanup() {
if (session) {
if (session.owner === cursor) {
session.endSession({ error }).finally(() => {
cursor.emit(AbstractCursor.CLOSE);
callback();
});
return;
}
if (!session.inTransaction()) {
(0, sessions_1.maybeClearPinnedConnection)(session, { error });
}
}
cursor.emit(AbstractCursor.CLOSE);
return callback();
}
cursor[kKilled] = true;
if (session.hasEnded) {
return completeCleanup();
}
(0, execute_operation_1.executeOperation)(cursor[kClient], new kill_cursors_1.KillCursorsOperation(cursorId, cursorNs, server, { session }))
.catch(() => null)
.finally(completeCleanup);
}
/** @internal */
function assertUninitialized(cursor) {
if (cursor[kInitialized]) {
throw new error_1.MongoCursorInUseError();
}
}
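/**
 * A sketch of the invariant this guard enforces (assumes a connected
 * `collection`):
 *
 * ```typescript
 * const cursor = collection.find({});
 * await cursor.next();   // initializes the cursor
 * cursor.batchSize(10);  // throws MongoCursorInUseError
 * ```
 */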
exports.assertUninitialized = assertUninitialized;
class ReadableCursorStream extends stream_1.Readable {
constructor(cursor) {
super({
objectMode: true,
autoDestroy: false,
highWaterMark: 1
});
this._readInProgress = false;
this._cursor = cursor;
}
// eslint-disable-next-line @typescript-eslint/no-unused-vars
_read(size) {
if (!this._readInProgress) {
this._readInProgress = true;
this._readNext();
}
}
_destroy(error, callback) {
this._cursor.close().then(() => callback(error), closeError => callback(closeError));
}
_readNext() {
next(this._cursor, true, (err, result) => {
if (err) {
// NOTE: This is questionable, but we have a test backing the behavior. It seems the
// desired behavior is that a stream ends cleanly when a user explicitly closes
// a client during iteration. Alternatively, we could do the "right" thing and
// propagate the error message by removing this special case.
if (err.message.match(/server is closed/)) {
this._cursor.close().catch(() => null);
return this.push(null);
}
// NOTE: This is also perhaps questionable. The rationale here is that these errors tend
// to be "operation was interrupted", where a cursor has been closed but there is an
// active getMore in-flight. This used to check if the cursor was killed, but once
// that check moved into cleanup, legitimate errors would no longer destroy the
// stream. There are change stream tests that specifically cover these cases.
if (err.message.match(/operation was interrupted/)) {
return this.push(null);
}
// NOTE: The two above checks on the message of the error will cause a null to be pushed
// to the stream, thus closing the stream before the destroy call happens. This means
// that either of those error messages on a change stream will not get a proper
// 'error' event to be emitted (the error passed to destroy). Change stream resumability
// relies on that error event to be emitted to create its new cursor and thus was not
// working on 4.4 servers because the error emitted on failover was "interrupted at
// shutdown" while on 5.0+ it is "The server is in quiesce mode and will shut down".
// See NODE-4475.
return this.destroy(err);
}
if (result == null) {
this.push(null);
}
else if (this.destroyed) {
this._cursor.close().catch(() => null);
}
else {
if (this.push(result)) {
return this._readNext();
}
this._readInProgress = false;
}
});
}
}
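/**
 * A sketch of how this Readable surfaces publicly, via `cursor.stream()`
 * (assumes a connected `collection`):
 *
 * ```typescript
 * const stream = collection.find({}).stream();
 * stream.on('data', doc => console.log(doc));
 * stream.on('end', () => console.log('cursor exhausted'));
 * ```
 */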
//# sourceMappingURL=abstract_cursor.js.map

1
node_modules/mongodb/lib/cursor/abstract_cursor.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

168
node_modules/mongodb/lib/cursor/aggregation_cursor.js generated vendored Normal file
View file

@ -0,0 +1,168 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AggregationCursor = void 0;
const aggregate_1 = require("../operations/aggregate");
const execute_operation_1 = require("../operations/execute_operation");
const utils_1 = require("../utils");
const abstract_cursor_1 = require("./abstract_cursor");
/** @internal */
const kPipeline = Symbol('pipeline');
/** @internal */
const kOptions = Symbol('options');
/**
* The **AggregationCursor** class is an internal class that embodies an aggregation cursor on MongoDB
* allowing for iteration over the results returned from the underlying query. It supports
* one-by-one document iteration, conversion to an array, and iteration as a Node.js
* stream (Node 4.x or higher).
* @public
*/
class AggregationCursor extends abstract_cursor_1.AbstractCursor {
/** @internal */
constructor(client, namespace, pipeline = [], options = {}) {
super(client, namespace, options);
this[kPipeline] = pipeline;
this[kOptions] = options;
}
get pipeline() {
return this[kPipeline];
}
clone() {
const clonedOptions = (0, utils_1.mergeOptions)({}, this[kOptions]);
delete clonedOptions.session;
return new AggregationCursor(this.client, this.namespace, this[kPipeline], {
...clonedOptions
});
}
map(transform) {
return super.map(transform);
}
/** @internal */
_initialize(session, callback) {
const aggregateOperation = new aggregate_1.AggregateOperation(this.namespace, this[kPipeline], {
...this[kOptions],
...this.cursorOptions,
session
});
(0, execute_operation_1.executeOperation)(this.client, aggregateOperation, (err, response) => {
if (err || response == null)
return callback(err);
// TODO: NODE-2882
callback(undefined, { server: aggregateOperation.server, session, response });
});
}
/** Execute the explain for the cursor */
async explain(verbosity) {
return (0, execute_operation_1.executeOperation)(this.client, new aggregate_1.AggregateOperation(this.namespace, this[kPipeline], {
...this[kOptions],
...this.cursorOptions,
explain: verbosity ?? true
}));
}
/** Add a group stage to the aggregation pipeline */
group($group) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $group });
return this;
}
/** Add a limit stage to the aggregation pipeline */
limit($limit) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $limit });
return this;
}
/** Add a match stage to the aggregation pipeline */
match($match) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $match });
return this;
}
/** Add an out stage to the aggregation pipeline */
out($out) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $out });
return this;
}
/**
* Add a project stage to the aggregation pipeline
*
* @remarks
* In order to strictly type this function you must provide an interface
* that represents the effect of your projection on the result documents.
*
* By default chaining a projection to your cursor changes the returned type to the generic {@link Document} type.
* You should specify a parameterized type to have assertions on your final results.
*
* @example
* ```typescript
* // Best way
* const docs: AggregationCursor<{ a: number }> = cursor.project<{ a: number }>({ _id: 0, a: true });
* // Flexible way
* const docs: AggregationCursor<Document> = cursor.project({ _id: 0, a: true });
* ```
*
* @remarks
* **Note for TypeScript Users:** adding a transform changes the return type of the iteration of this cursor;
* it **does not** return a new instance of a cursor. This means that when calling project,
* you should always assign the result to a new variable in order to get a correctly typed cursor variable.
* Take note of the following example:
*
* @example
* ```typescript
* const cursor: AggregationCursor<{ a: number; b: string }> = coll.aggregate([]);
* const projectCursor = cursor.project<{ a: number }>({ _id: 0, a: true });
* const aPropOnlyArray: {a: number}[] = await projectCursor.toArray();
*
* // or always use chaining and save the final cursor
*
* const cursor = coll.aggregate().project<{ a: string }>({
* _id: 0,
*   a: { $convert: { input: '$a', to: 'string' } }
* });
* ```
*/
project($project) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $project });
return this;
}
/** Add a lookup stage to the aggregation pipeline */
lookup($lookup) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $lookup });
return this;
}
/** Add a redact stage to the aggregation pipeline */
redact($redact) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $redact });
return this;
}
/** Add a skip stage to the aggregation pipeline */
skip($skip) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $skip });
return this;
}
/** Add a sort stage to the aggregation pipeline */
sort($sort) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $sort });
return this;
}
/** Add an unwind stage to the aggregation pipeline */
unwind($unwind) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $unwind });
return this;
}
/** Add a geoNear stage to the aggregation pipeline */
geoNear($geoNear) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kPipeline].push({ $geoNear });
return this;
}
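/**
 * A sketch of chaining these builder stages before iterating (assumes a
 * connected `coll`; the field names are illustrative):
 *
 * ```typescript
 * const names = await coll
 *   .aggregate()
 *   .match({ kind: 'cat' })
 *   .sort({ name: 1 })
 *   .limit(5)
 *   .project<{ name: string }>({ _id: 0, name: 1 })
 *   .toArray();
 * ```
 */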
}
exports.AggregationCursor = AggregationCursor;
//# sourceMappingURL=aggregation_cursor.js.map

1
node_modules/mongodb/lib/cursor/aggregation_cursor.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

115
node_modules/mongodb/lib/cursor/change_stream_cursor.js generated vendored Normal file
View file

@ -0,0 +1,115 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ChangeStreamCursor = void 0;
const change_stream_1 = require("../change_stream");
const constants_1 = require("../constants");
const aggregate_1 = require("../operations/aggregate");
const execute_operation_1 = require("../operations/execute_operation");
const utils_1 = require("../utils");
const abstract_cursor_1 = require("./abstract_cursor");
/** @internal */
class ChangeStreamCursor extends abstract_cursor_1.AbstractCursor {
constructor(client, namespace, pipeline = [], options = {}) {
super(client, namespace, options);
this.pipeline = pipeline;
this.options = options;
this._resumeToken = null;
this.startAtOperationTime = options.startAtOperationTime;
if (options.startAfter) {
this.resumeToken = options.startAfter;
}
else if (options.resumeAfter) {
this.resumeToken = options.resumeAfter;
}
}
set resumeToken(token) {
this._resumeToken = token;
this.emit(change_stream_1.ChangeStream.RESUME_TOKEN_CHANGED, token);
}
get resumeToken() {
return this._resumeToken;
}
get resumeOptions() {
const options = {
...this.options
};
for (const key of ['resumeAfter', 'startAfter', 'startAtOperationTime']) {
delete options[key];
}
if (this.resumeToken != null) {
if (this.options.startAfter && !this.hasReceived) {
options.startAfter = this.resumeToken;
}
else {
options.resumeAfter = this.resumeToken;
}
}
else if (this.startAtOperationTime != null && (0, utils_1.maxWireVersion)(this.server) >= 7) {
options.startAtOperationTime = this.startAtOperationTime;
}
return options;
}
cacheResumeToken(resumeToken) {
if (this.bufferedCount() === 0 && this.postBatchResumeToken) {
this.resumeToken = this.postBatchResumeToken;
}
else {
this.resumeToken = resumeToken;
}
this.hasReceived = true;
}
_processBatch(response) {
const cursor = response.cursor;
if (cursor.postBatchResumeToken) {
this.postBatchResumeToken = response.cursor.postBatchResumeToken;
const batch = 'firstBatch' in response.cursor ? response.cursor.firstBatch : response.cursor.nextBatch;
if (batch.length === 0) {
this.resumeToken = cursor.postBatchResumeToken;
}
}
}
clone() {
return new ChangeStreamCursor(this.client, this.namespace, this.pipeline, {
...this.cursorOptions
});
}
_initialize(session, callback) {
const aggregateOperation = new aggregate_1.AggregateOperation(this.namespace, this.pipeline, {
...this.cursorOptions,
...this.options,
session
});
(0, execute_operation_1.executeOperation)(session.client, aggregateOperation, (err, response) => {
if (err || response == null) {
return callback(err);
}
const server = aggregateOperation.server;
this.maxWireVersion = (0, utils_1.maxWireVersion)(server);
if (this.startAtOperationTime == null &&
this.resumeAfter == null &&
this.startAfter == null &&
this.maxWireVersion >= 7) {
this.startAtOperationTime = response.operationTime;
}
this._processBatch(response);
this.emit(constants_1.INIT, response);
this.emit(constants_1.RESPONSE);
// TODO: NODE-2882
callback(undefined, { server, session, response });
});
}
_getMore(batchSize, callback) {
super._getMore(batchSize, (err, response) => {
if (err) {
return callback(err);
}
this.maxWireVersion = (0, utils_1.maxWireVersion)(this.server);
this._processBatch(response);
this.emit(change_stream_1.ChangeStream.MORE, response);
this.emit(change_stream_1.ChangeStream.RESPONSE);
callback(err, response);
});
}
}
exports.ChangeStreamCursor = ChangeStreamCursor;
//# sourceMappingURL=change_stream_cursor.js.map

1
node_modules/mongodb/lib/cursor/change_stream_cursor.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

381
node_modules/mongodb/lib/cursor/find_cursor.js generated vendored Normal file
View file

@ -0,0 +1,381 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.FindCursor = exports.FLAGS = void 0;
const error_1 = require("../error");
const count_1 = require("../operations/count");
const execute_operation_1 = require("../operations/execute_operation");
const find_1 = require("../operations/find");
const sort_1 = require("../sort");
const utils_1 = require("../utils");
const abstract_cursor_1 = require("./abstract_cursor");
/** @internal */
const kFilter = Symbol('filter');
/** @internal */
const kNumReturned = Symbol('numReturned');
/** @internal */
const kBuiltOptions = Symbol('builtOptions');
/** @public Flags allowed for cursor */
exports.FLAGS = [
'tailable',
'oplogReplay',
'noCursorTimeout',
'awaitData',
'exhaust',
'partial'
];
/** @public */
class FindCursor extends abstract_cursor_1.AbstractCursor {
/** @internal */
constructor(client, namespace, filter = {}, options = {}) {
super(client, namespace, options);
this[kFilter] = filter;
this[kBuiltOptions] = options;
if (options.sort != null) {
this[kBuiltOptions].sort = (0, sort_1.formatSort)(options.sort);
}
}
clone() {
const clonedOptions = (0, utils_1.mergeOptions)({}, this[kBuiltOptions]);
delete clonedOptions.session;
return new FindCursor(this.client, this.namespace, this[kFilter], {
...clonedOptions
});
}
map(transform) {
return super.map(transform);
}
/** @internal */
_initialize(session, callback) {
const findOperation = new find_1.FindOperation(undefined, this.namespace, this[kFilter], {
...this[kBuiltOptions],
...this.cursorOptions,
session
});
(0, execute_operation_1.executeOperation)(this.client, findOperation, (err, response) => {
if (err || response == null)
return callback(err);
// TODO: We only need this for legacy queries that do not support `limit`, maybe
// the value should only be saved in those cases.
if (response.cursor) {
this[kNumReturned] = response.cursor.firstBatch.length;
}
else {
this[kNumReturned] = response.documents ? response.documents.length : 0;
}
// TODO: NODE-2882
callback(undefined, { server: findOperation.server, session, response });
});
}
/** @internal */
_getMore(batchSize, callback) {
// NOTE: this is to support client provided limits in pre-command servers
const numReturned = this[kNumReturned];
if (numReturned) {
const limit = this[kBuiltOptions].limit;
batchSize =
limit && limit > 0 && numReturned + batchSize > limit ? limit - numReturned : batchSize;
if (batchSize <= 0) {
this.close().finally(() => callback());
return;
}
}
super._getMore(batchSize, (err, response) => {
if (err)
return callback(err);
// TODO: wrap this in some logic to prevent it from happening if we don't need this support
if (response) {
this[kNumReturned] = this[kNumReturned] + response.cursor.nextBatch.length;
}
callback(undefined, response);
});
}
/**
* Get the count of documents for this cursor
* @deprecated Use `collection.estimatedDocumentCount` or `collection.countDocuments` instead
*/
async count(options) {
(0, utils_1.emitWarningOnce)('cursor.count is deprecated and will be removed in the next major version, please use `collection.estimatedDocumentCount` or `collection.countDocuments` instead ');
if (typeof options === 'boolean') {
throw new error_1.MongoInvalidArgumentError('Invalid first parameter to count');
}
return (0, execute_operation_1.executeOperation)(this.client, new count_1.CountOperation(this.namespace, this[kFilter], {
...this[kBuiltOptions],
...this.cursorOptions,
...options
}));
}
/** Execute the explain for the cursor */
async explain(verbosity) {
return (0, execute_operation_1.executeOperation)(this.client, new find_1.FindOperation(undefined, this.namespace, this[kFilter], {
...this[kBuiltOptions],
...this.cursorOptions,
explain: verbosity ?? true
}));
}
/** Set the cursor query */
filter(filter) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kFilter] = filter;
return this;
}
/**
* Set the cursor hint
*
* @param hint - If specified, then the query system will only consider plans using the hinted index.
*/
hint(hint) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].hint = hint;
return this;
}
/**
* Set the cursor min
*
* @param min - Specify a $min value as the inclusive lower bound for a specific index in order to constrain the results of find(). The $min specifies the lower bound for all keys of the index in order.
*/
min(min) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].min = min;
return this;
}
/**
* Set the cursor max
*
* @param max - Specify a $max value as the exclusive upper bound for a specific index in order to constrain the results of find(). The $max specifies the upper bound for all keys of the index in order.
*/
max(max) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].max = max;
return this;
}
/**
* Set the cursor returnKey.
* If set to true, modifies the cursor to only return the index field or fields for the results of the query, rather than documents.
* If set to true and the query does not use an index to perform the read operation, the returned documents will not contain any fields.
*
* @param value - the returnKey value.
*/
returnKey(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].returnKey = value;
return this;
}
/**
* Modifies the output of a query by adding a field $recordId to matching documents. $recordId is the internal key which uniquely identifies a document in a collection.
*
* @param value - The $showDiskLoc option has now been deprecated and replaced with the showRecordId field. $showDiskLoc will still be accepted for OP_QUERY style find.
*/
showRecordId(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].showRecordId = value;
return this;
}
/**
* Add a query modifier to the cursor query
*
* @param name - The query modifier (must start with $, such as $orderby)
* @param value - The modifier value.
*/
addQueryModifier(name, value) {
(0, abstract_cursor_1.assertUninitialized)(this);
if (name[0] !== '$') {
throw new error_1.MongoInvalidArgumentError(`${name} is not a valid query modifier`);
}
// Strip off the $
const field = name.substr(1);
// NOTE: consider some TS magic for this
switch (field) {
case 'comment':
this[kBuiltOptions].comment = value;
break;
case 'explain':
this[kBuiltOptions].explain = value;
break;
case 'hint':
this[kBuiltOptions].hint = value;
break;
case 'max':
this[kBuiltOptions].max = value;
break;
case 'maxTimeMS':
this[kBuiltOptions].maxTimeMS = value;
break;
case 'min':
this[kBuiltOptions].min = value;
break;
case 'orderby':
this[kBuiltOptions].sort = (0, sort_1.formatSort)(value);
break;
case 'query':
this[kFilter] = value;
break;
case 'returnKey':
this[kBuiltOptions].returnKey = value;
break;
case 'showDiskLoc':
this[kBuiltOptions].showRecordId = value;
break;
default:
throw new error_1.MongoInvalidArgumentError(`Invalid query modifier: ${name}`);
}
return this;
}
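/**
 * For example, a sketch of the modifier-to-option translation above (assumes a
 * connected `collection`):
 *
 * ```typescript
 * // Equivalent to cursor.sort({ age: -1 }); unknown modifiers throw.
 * const cursor = collection.find({}).addQueryModifier('$orderby', { age: -1 });
 * ```
 */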
/**
* Add a comment to the cursor query allowing for tracking the comment in the log.
*
* @param value - The comment attached to this query.
*/
comment(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].comment = value;
return this;
}
/**
* Set a maxAwaitTimeMS on a tailable cursor query to customize the timeout value for the option awaitData (only supported on MongoDB 3.2 or higher, ignored otherwise)
*
* @param value - Number of milliseconds to wait before aborting the tailed query.
*/
maxAwaitTimeMS(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
if (typeof value !== 'number') {
throw new error_1.MongoInvalidArgumentError('Argument for maxAwaitTimeMS must be a number');
}
this[kBuiltOptions].maxAwaitTimeMS = value;
return this;
}
/**
* Set a maxTimeMS on the cursor query, allowing for hard timeout limits on queries (Only supported on MongoDB 2.6 or higher)
*
* @param value - Number of milliseconds to wait before aborting the query.
*/
maxTimeMS(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
if (typeof value !== 'number') {
throw new error_1.MongoInvalidArgumentError('Argument for maxTimeMS must be a number');
}
this[kBuiltOptions].maxTimeMS = value;
return this;
}
/**
* Add a project stage to the aggregation pipeline
*
* @remarks
* In order to strictly type this function you must provide an interface
* that represents the effect of your projection on the result documents.
*
* By default chaining a projection to your cursor changes the returned type to the generic
* {@link Document} type.
* You should specify a parameterized type to have assertions on your final results.
*
* @example
* ```typescript
* // Best way
* const docs: FindCursor<{ a: number }> = cursor.project<{ a: number }>({ _id: 0, a: true });
* // Flexible way
* const docs: FindCursor<Document> = cursor.project({ _id: 0, a: true });
* ```
*
* @remarks
*
* **Note for TypeScript Users:** adding a transform changes the return type of the iteration of this cursor;
* it **does not** return a new instance of a cursor. This means that when calling project,
* you should always assign the result to a new variable in order to get a correctly typed cursor variable.
* Take note of the following example:
*
* @example
* ```typescript
* const cursor: FindCursor<{ a: number; b: string }> = coll.find();
* const projectCursor = cursor.project<{ a: number }>({ _id: 0, a: true });
* const aPropOnlyArray: {a: number}[] = await projectCursor.toArray();
*
* // or always use chaining and save the final cursor
*
* const cursor = coll.find().project<{ a: string }>({
* _id: 0,
*   a: { $convert: { input: '$a', to: 'string' } }
* });
* ```
*/
project(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].projection = value;
return this;
}
/**
* Sets the sort order of the cursor query.
*
* @param sort - The key or keys set for the sort.
* @param direction - The direction of the sorting (1 or -1).
*/
sort(sort, direction) {
(0, abstract_cursor_1.assertUninitialized)(this);
if (this[kBuiltOptions].tailable) {
throw new error_1.MongoTailableCursorError('Tailable cursor does not support sorting');
}
this[kBuiltOptions].sort = (0, sort_1.formatSort)(sort, direction);
return this;
}
/**
* Allows disk use for blocking sort operations exceeding 100MB memory. (MongoDB 3.2 or higher)
*
* @remarks
* {@link https://docs.mongodb.com/manual/reference/command/find/#find-cmd-allowdiskuse | find command allowDiskUse documentation}
*/
allowDiskUse(allow = true) {
(0, abstract_cursor_1.assertUninitialized)(this);
if (!this[kBuiltOptions].sort) {
throw new error_1.MongoInvalidArgumentError('Option "allowDiskUse" requires a sort specification');
}
// As of 6.0 the default is true. This allows users to get back to the old behavior.
if (!allow) {
this[kBuiltOptions].allowDiskUse = false;
return this;
}
this[kBuiltOptions].allowDiskUse = true;
return this;
}
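/**
 * For example (a minimal sketch; note that a sort must be set first, per the
 * check above):
 *
 * ```typescript
 * const docs = await collection
 *   .find({})
 *   .sort({ score: -1 })
 *   .allowDiskUse() // defaults to true; pass false to opt out on 6.0+ servers
 *   .toArray();
 * ```
 */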
/**
* Set the collation options for the cursor.
*
* @param value - The cursor collation options (MongoDB 3.4 or higher); see the 3.4 documentation for available fields.
*/
collation(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
this[kBuiltOptions].collation = value;
return this;
}
/**
* Set the limit for the cursor.
*
* @param value - The limit for the cursor query.
*/
limit(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
if (this[kBuiltOptions].tailable) {
throw new error_1.MongoTailableCursorError('Tailable cursor does not support limit');
}
if (typeof value !== 'number') {
throw new error_1.MongoInvalidArgumentError('Operation "limit" requires an integer');
}
this[kBuiltOptions].limit = value;
return this;
}
/**
* Set the skip for the cursor.
*
* @param value - The skip for the cursor query.
*/
skip(value) {
(0, abstract_cursor_1.assertUninitialized)(this);
if (this[kBuiltOptions].tailable) {
throw new error_1.MongoTailableCursorError('Tailable cursor does not support skip');
}
if (typeof value !== 'number') {
throw new error_1.MongoInvalidArgumentError('Operation "skip" requires an integer');
}
this[kBuiltOptions].skip = value;
return this;
}
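/**
 * A sketch of combining sort, skip, and limit for offset pagination (assumes a
 * connected `collection`; the page arithmetic is illustrative):
 *
 * ```typescript
 * const pageSize = 20;
 * const page = 3;
 * const docs = await collection
 *   .find({})
 *   .sort({ createdAt: -1 })
 *   .skip((page - 1) * pageSize)
 *   .limit(pageSize)
 *   .toArray();
 * ```
 */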
}
exports.FindCursor = FindCursor;
//# sourceMappingURL=find_cursor.js.map

1
node_modules/mongodb/lib/cursor/find_cursor.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

37
node_modules/mongodb/lib/cursor/list_collections_cursor.js generated vendored Normal file
View file

@ -0,0 +1,37 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ListCollectionsCursor = void 0;
const execute_operation_1 = require("../operations/execute_operation");
const list_collections_1 = require("../operations/list_collections");
const abstract_cursor_1 = require("./abstract_cursor");
/** @public */
class ListCollectionsCursor extends abstract_cursor_1.AbstractCursor {
constructor(db, filter, options) {
super(db.s.client, db.s.namespace, options);
this.parent = db;
this.filter = filter;
this.options = options;
}
clone() {
return new ListCollectionsCursor(this.parent, this.filter, {
...this.options,
...this.cursorOptions
});
}
/** @internal */
_initialize(session, callback) {
const operation = new list_collections_1.ListCollectionsOperation(this.parent, this.filter, {
...this.cursorOptions,
...this.options,
session
});
(0, execute_operation_1.executeOperation)(this.parent.s.client, operation, (err, response) => {
if (err || response == null)
return callback(err);
// TODO: NODE-2882
callback(undefined, { server: operation.server, session, response });
});
}
}
exports.ListCollectionsCursor = ListCollectionsCursor;
//# sourceMappingURL=list_collections_cursor.js.map

1
node_modules/mongodb/lib/cursor/list_collections_cursor.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

36
node_modules/mongodb/lib/cursor/list_indexes_cursor.js generated vendored Normal file
View file

@ -0,0 +1,36 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ListIndexesCursor = void 0;
const execute_operation_1 = require("../operations/execute_operation");
const indexes_1 = require("../operations/indexes");
const abstract_cursor_1 = require("./abstract_cursor");
/** @public */
class ListIndexesCursor extends abstract_cursor_1.AbstractCursor {
constructor(collection, options) {
super(collection.s.db.s.client, collection.s.namespace, options);
this.parent = collection;
this.options = options;
}
clone() {
return new ListIndexesCursor(this.parent, {
...this.options,
...this.cursorOptions
});
}
/** @internal */
_initialize(session, callback) {
const operation = new indexes_1.ListIndexesOperation(this.parent, {
...this.cursorOptions,
...this.options,
session
});
(0, execute_operation_1.executeOperation)(this.parent.s.db.s.client, operation, (err, response) => {
if (err || response == null)
return callback(err);
// TODO: NODE-2882
callback(undefined, { server: operation.server, session, response });
});
}
}
exports.ListIndexesCursor = ListIndexesCursor;
//# sourceMappingURL=list_indexes_cursor.js.map

1
node_modules/mongodb/lib/cursor/list_indexes_cursor.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

349
node_modules/mongodb/lib/db.js generated vendored Normal file
View file

@ -0,0 +1,349 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Db = void 0;
const admin_1 = require("./admin");
const bson_1 = require("./bson");
const change_stream_1 = require("./change_stream");
const collection_1 = require("./collection");
const CONSTANTS = require("./constants");
const aggregation_cursor_1 = require("./cursor/aggregation_cursor");
const list_collections_cursor_1 = require("./cursor/list_collections_cursor");
const error_1 = require("./error");
const add_user_1 = require("./operations/add_user");
const collections_1 = require("./operations/collections");
const create_collection_1 = require("./operations/create_collection");
const drop_1 = require("./operations/drop");
const execute_operation_1 = require("./operations/execute_operation");
const indexes_1 = require("./operations/indexes");
const profiling_level_1 = require("./operations/profiling_level");
const remove_user_1 = require("./operations/remove_user");
const rename_1 = require("./operations/rename");
const run_command_1 = require("./operations/run_command");
const set_profiling_level_1 = require("./operations/set_profiling_level");
const stats_1 = require("./operations/stats");
const read_concern_1 = require("./read_concern");
const read_preference_1 = require("./read_preference");
const utils_1 = require("./utils");
const write_concern_1 = require("./write_concern");
// Allowed parameters
const DB_OPTIONS_ALLOW_LIST = [
'writeConcern',
'readPreference',
'readPreferenceTags',
'native_parser',
'forceServerObjectId',
'pkFactory',
'serializeFunctions',
'raw',
'authSource',
'ignoreUndefined',
'readConcern',
'retryMiliSeconds',
'numberOfRetries',
'useBigInt64',
'promoteBuffers',
'promoteLongs',
'bsonRegExp',
'enableUtf8Validation',
'promoteValues',
'compression',
'retryWrites'
];
/**
* The **Db** class is a class that represents a MongoDB Database.
* @public
*
* @example
* ```ts
* import { MongoClient } from 'mongodb';
*
* interface Pet {
* name: string;
* kind: 'dog' | 'cat' | 'fish';
* }
*
* const client = new MongoClient('mongodb://localhost:27017');
* const db = client.db();
*
* // Create a collection that validates our union
* await db.createCollection<Pet>('pets', {
* validator: { $expr: { $in: ['$kind', ['dog', 'cat', 'fish']] } }
* })
* ```
*/
class Db {
/**
* Creates a new Db instance
*
* @param client - The MongoClient for the database.
* @param databaseName - The name of the database this instance represents.
* @param options - Optional settings for Db construction
*/
constructor(client, databaseName, options) {
options = options ?? {};
// Filter the options
options = (0, utils_1.filterOptions)(options, DB_OPTIONS_ALLOW_LIST);
// Ensure we have a valid db name
validateDatabaseName(databaseName);
// Internal state of the db object
this.s = {
// Client
client,
// Options
options,
// Unpack read preference
readPreference: read_preference_1.ReadPreference.fromOptions(options),
// Merge bson options
bsonOptions: (0, bson_1.resolveBSONOptions)(options, client),
// Set up the primary key factory or fallback to ObjectId
pkFactory: options?.pkFactory ?? utils_1.DEFAULT_PK_FACTORY,
// ReadConcern
readConcern: read_concern_1.ReadConcern.fromOptions(options),
writeConcern: write_concern_1.WriteConcern.fromOptions(options),
// Namespace
namespace: new utils_1.MongoDBNamespace(databaseName)
};
}
get databaseName() {
return this.s.namespace.db;
}
// Options
get options() {
return this.s.options;
}
/**
* Check if a secondary can be used (because the read preference is *not* set to primary)
*/
get secondaryOk() {
return this.s.readPreference?.preference !== 'primary' || false;
}
get readConcern() {
return this.s.readConcern;
}
/**
* The current readPreference of the Db. If not explicitly defined for
* this Db, it will be inherited from the parent MongoClient.
*/
get readPreference() {
if (this.s.readPreference == null) {
return this.s.client.readPreference;
}
return this.s.readPreference;
}
get bsonOptions() {
return this.s.bsonOptions;
}
// Get the write concern
get writeConcern() {
return this.s.writeConcern;
}
get namespace() {
return this.s.namespace.toString();
}
/**
* Create a new collection on a server with the specified options. Use this to create capped collections.
* More information about command options is available at https://docs.mongodb.com/manual/reference/command/create/
*
* @param name - The name of the collection to create
* @param options - Optional settings for the command
*/
async createCollection(name, options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new create_collection_1.CreateCollectionOperation(this, name, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Execute a command
*
* @remarks
* This command does not inherit options from the MongoClient.
*
* @param command - The command to run
* @param options - Optional settings for the command
*/
async command(command, options) {
// Intentionally, we do not inherit options from parent for this operation.
return (0, execute_operation_1.executeOperation)(this.s.client, new run_command_1.RunCommandOperation(this, command, options));
}
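/**
 * For example, a minimal sketch (assumes a connected `db`; `ping` is a
 * standard server command):
 *
 * ```typescript
 * const result = await db.command({ ping: 1 }); // expect { ok: 1 }
 * ```
 */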
/**
* Execute an aggregation framework pipeline against the database; requires MongoDB \>= 3.6
*
* @param pipeline - An array of aggregation stages to be executed
* @param options - Optional settings for the command
*/
aggregate(pipeline = [], options) {
return new aggregation_cursor_1.AggregationCursor(this.s.client, this.s.namespace, pipeline, (0, utils_1.resolveOptions)(this, options));
}
/** Return the Admin db instance */
admin() {
return new admin_1.Admin(this);
}
/**
* Returns a reference to a MongoDB Collection. If it does not exist it will be created implicitly.
*
* @param name - The collection name we wish to access.
* @returns The new Collection instance
*/
collection(name, options = {}) {
if (typeof options === 'function') {
throw new error_1.MongoInvalidArgumentError('The callback form of this helper has been removed.');
}
return new collection_1.Collection(this, name, (0, utils_1.resolveOptions)(this, options));
}
/**
* Get all the db statistics.
*
* @param options - Optional settings for the command
*/
async stats(options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new stats_1.DbStatsOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
listCollections(filter = {}, options = {}) {
return new list_collections_cursor_1.ListCollectionsCursor(this, filter, (0, utils_1.resolveOptions)(this, options));
}
/**
* Rename a collection.
*
* @remarks
* This operation does not inherit options from the MongoClient.
*
* @param fromCollection - Name of current collection to rename
* @param toCollection - New name of the collection
* @param options - Optional settings for the command
*/
async renameCollection(fromCollection, toCollection, options) {
// Intentionally, we do not inherit options from parent for this operation.
return (0, execute_operation_1.executeOperation)(this.s.client, new rename_1.RenameOperation(this.collection(fromCollection), toCollection, { ...options, new_collection: true, readPreference: read_preference_1.ReadPreference.primary }));
}
/**
* Drop a collection from the database, removing it permanently. New accesses will create a new collection.
*
* @param name - Name of collection to drop
* @param options - Optional settings for the command
*/
async dropCollection(name, options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new drop_1.DropCollectionOperation(this, name, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Drop a database, removing it permanently from the server.
*
* @param options - Optional settings for the command
*/
async dropDatabase(options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new drop_1.DropDatabaseOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Fetch all collections for the current db.
*
* @param options - Optional settings for the command
*/
async collections(options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new collections_1.CollectionsOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Creates an index on the db and collection.
*
* @param name - Name of the collection to create the index on.
* @param indexSpec - Specify the field to index, or an index specification
* @param options - Optional settings for the command
*/
async createIndex(name, indexSpec, options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new indexes_1.CreateIndexOperation(this, name, indexSpec, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Add a user to the database
*
* @param username - The username for the new user
* @param passwordOrOptions - An optional password for the new user, or the options for the command
* @param options - Optional settings for the command
*/
async addUser(username, passwordOrOptions, options) {
options =
options != null && typeof options === 'object'
? options
: passwordOrOptions != null && typeof passwordOrOptions === 'object'
? passwordOrOptions
: undefined;
const password = typeof passwordOrOptions === 'string' ? passwordOrOptions : undefined;
return (0, execute_operation_1.executeOperation)(this.s.client, new add_user_1.AddUserOperation(this, username, password, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Remove a user from a database
*
* @param username - The username to remove
* @param options - Optional settings for the command
*/
async removeUser(username, options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new remove_user_1.RemoveUserOperation(this, username, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Set the current profiling level of MongoDB
*
* @param level - The new profiling level (off, slow_only, all).
* @param options - Optional settings for the command
*/
async setProfilingLevel(level, options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new set_profiling_level_1.SetProfilingLevelOperation(this, level, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Retrieve the current profiling level for MongoDB
*
* @param options - Optional settings for the command
*/
async profilingLevel(options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new profiling_level_1.ProfilingLevelOperation(this, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Retrieves this collection's index info.
*
* @param name - The name of the collection.
* @param options - Optional settings for the command
*/
async indexInformation(name, options) {
return (0, execute_operation_1.executeOperation)(this.s.client, new indexes_1.IndexInformationOperation(this, name, (0, utils_1.resolveOptions)(this, options)));
}
/**
* Create a new Change Stream, watching for new changes (insertions, updates,
* replacements, deletions, and invalidations) in this database. Will ignore all
* changes to system collections.
*
* @remarks
* watch() accepts two generic arguments for distinct use cases:
* - The first is to provide the schema that may be defined for all the collections within this database
* - The second is to override the shape of the change stream document entirely, if it is not provided the type will default to ChangeStreamDocument of the first argument
*
* @param pipeline - An array of {@link https://docs.mongodb.com/manual/reference/operator/aggregation-pipeline/|aggregation pipeline stages} through which to pass change stream documents. This allows for filtering (using $match) and manipulating the change stream documents.
* @param options - Optional settings for the command
* @typeParam TSchema - Type of the data being detected by the change stream
* @typeParam TChange - Type of the whole change stream document emitted
*/
watch(pipeline = [], options = {}) {
// Allow optionally not specifying a pipeline
if (!Array.isArray(pipeline)) {
options = pipeline;
pipeline = [];
}
return new change_stream_1.ChangeStream(this, pipeline, (0, utils_1.resolveOptions)(this, options));
}
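/**
 * For example, a minimal sketch of watching a whole database (assumes a
 * connected `db`; the $match stage is illustrative):
 *
 * ```typescript
 * const changeStream = db.watch([{ $match: { operationType: 'insert' } }]);
 * changeStream.on('change', change => console.log(change));
 * ```
 */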
}
exports.Db = Db;
Db.SYSTEM_NAMESPACE_COLLECTION = CONSTANTS.SYSTEM_NAMESPACE_COLLECTION;
Db.SYSTEM_INDEX_COLLECTION = CONSTANTS.SYSTEM_INDEX_COLLECTION;
Db.SYSTEM_PROFILE_COLLECTION = CONSTANTS.SYSTEM_PROFILE_COLLECTION;
Db.SYSTEM_USER_COLLECTION = CONSTANTS.SYSTEM_USER_COLLECTION;
Db.SYSTEM_COMMAND_COLLECTION = CONSTANTS.SYSTEM_COMMAND_COLLECTION;
Db.SYSTEM_JS_COLLECTION = CONSTANTS.SYSTEM_JS_COLLECTION;
// TODO(NODE-3484): Refactor into MongoDBNamespace
// Validate the database name
function validateDatabaseName(databaseName) {
if (typeof databaseName !== 'string')
throw new error_1.MongoInvalidArgumentError('Database name must be a string');
if (databaseName.length === 0)
throw new error_1.MongoInvalidArgumentError('Database name cannot be the empty string');
if (databaseName === '$external')
return;
const invalidChars = [' ', '.', '$', '/', '\\'];
for (let i = 0; i < invalidChars.length; i++) {
if (databaseName.indexOf(invalidChars[i]) !== -1)
throw new error_1.MongoAPIError(`database names cannot contain the character '${invalidChars[i]}'`);
}
}
//# sourceMappingURL=db.js.map

1
node_modules/mongodb/lib/db.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

69
node_modules/mongodb/lib/deps.js generated vendored Normal file
View file

@ -0,0 +1,69 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AutoEncryptionLoggerLevel = exports.aws4 = exports.saslprep = exports.Snappy = exports.getAwsCredentialProvider = exports.ZStandard = exports.Kerberos = void 0;
const error_1 = require("./error");
function makeErrorModule(error) {
const props = error ? { kModuleError: error } : {};
return new Proxy(props, {
get: (_, key) => {
if (key === 'kModuleError') {
return error;
}
throw error;
},
set: () => {
throw error;
}
});
}
exports.Kerberos = makeErrorModule(new error_1.MongoMissingDependencyError('Optional module `kerberos` not found. Please install it to enable kerberos authentication'));
try {
// Ensure you always wrap an optional require in the try block NODE-3199
exports.Kerberos = require('kerberos');
}
catch { } // eslint-disable-line
exports.ZStandard = makeErrorModule(new error_1.MongoMissingDependencyError('Optional module `@mongodb-js/zstd` not found. Please install it to enable zstd compression'));
try {
exports.ZStandard = require('@mongodb-js/zstd');
}
catch { } // eslint-disable-line
function getAwsCredentialProvider() {
try {
// Ensure you always wrap an optional require in the try block NODE-3199
const credentialProvider = require('@aws-sdk/credential-providers');
return credentialProvider;
}
catch {
return makeErrorModule(new error_1.MongoMissingDependencyError('Optional module `@aws-sdk/credential-providers` not found.' +
' Please install it to enable getting aws credentials via the official sdk.'));
}
}
exports.getAwsCredentialProvider = getAwsCredentialProvider;
exports.Snappy = makeErrorModule(new error_1.MongoMissingDependencyError('Optional module `snappy` not found. Please install it to enable snappy compression'));
try {
// Ensure you always wrap an optional require in the try block NODE-3199
exports.Snappy = require('snappy');
}
catch { } // eslint-disable-line
exports.saslprep = makeErrorModule(new error_1.MongoMissingDependencyError('Optional module `saslprep` not found.' +
' Please install it to enable Stringprep Profile for User Names and Passwords'));
try {
// Ensure you always wrap an optional require in the try block NODE-3199
exports.saslprep = require('saslprep');
}
catch { } // eslint-disable-line
exports.aws4 = makeErrorModule(new error_1.MongoMissingDependencyError('Optional module `aws4` not found. Please install it to enable AWS authentication'));
try {
// Ensure you always wrap an optional require in the try block NODE-3199
exports.aws4 = require('aws4');
}
catch { } // eslint-disable-line
/** @public */
exports.AutoEncryptionLoggerLevel = Object.freeze({
FatalError: 0,
Error: 1,
Warning: 2,
Info: 3,
Trace: 4
});
//# sourceMappingURL=deps.js.map
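The Proxy returned by makeErrorModule above is what lets the driver export a placeholder that only fails when an optional dependency is actually touched. A self-contained sketch of the same pattern, with a plain Error standing in for MongoMissingDependencyError:

// Sketch: the optional-require pattern from deps.js, reduced to its essentials.
function makeErrorModule(error) {
  return new Proxy({ kModuleError: error }, {
    get: (_, key) => {
      if (key === 'kModuleError') return error; // callers may probe availability
      throw error; // any other property access fails loudly
    },
    set: () => {
      throw error;
    }
  });
}

let Snappy = makeErrorModule(new Error('Optional module `snappy` not found'));
try {
  Snappy = require('snappy'); // replaced only if the optional dep is installed
} catch {} // swallow MODULE_NOT_FOUND; the error Proxy stays in place

if (Snappy.kModuleError) {
  console.log('snappy unavailable:', Snappy.kModuleError.message);
}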

1
node_modules/mongodb/lib/deps.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"deps.js","sourceRoot":"","sources":["../src/deps.ts"],"names":[],"mappings":";;;AAIA,mCAAsD;AAItD,SAAS,eAAe,CAAC,KAAU;IACjC,MAAM,KAAK,GAAG,KAAK,CAAC,CAAC,CAAC,EAAE,YAAY,EAAE,KAAK,EAAE,CAAC,CAAC,CAAC,EAAE,CAAC;IACnD,OAAO,IAAI,KAAK,CAAC,KAAK,EAAE;QACtB,GAAG,EAAE,CAAC,CAAM,EAAE,GAAQ,EAAE,EAAE;YACxB,IAAI,GAAG,KAAK,cAAc,EAAE;gBAC1B,OAAO,KAAK,CAAC;aACd;YACD,MAAM,KAAK,CAAC;QACd,CAAC;QACD,GAAG,EAAE,GAAG,EAAE;YACR,MAAM,KAAK,CAAC;QACd,CAAC;KACF,CAAC,CAAC;AACL,CAAC;AAEU,QAAA,QAAQ,GACjB,eAAe,CACb,IAAI,mCAA2B,CAC7B,2FAA2F,CAC5F,CACF,CAAC;AAEJ,IAAI;IACF,wEAAwE;IACxE,gBAAQ,GAAG,OAAO,CAAC,UAAU,CAAC,CAAC;CAChC;AAAC,MAAM,GAAE,CAAC,sBAAsB;AAwBtB,QAAA,SAAS,GAClB,eAAe,CACb,IAAI,mCAA2B,CAC7B,4FAA4F,CAC7F,CACF,CAAC;AAEJ,IAAI;IACF,iBAAS,GAAG,OAAO,CAAC,kBAAkB,CAAC,CAAC;CACzC;AAAC,MAAM,GAAE,CAAC,sBAAsB;AAMjC,SAAgB,wBAAwB;IAGtC,IAAI;QACF,wEAAwE;QACxE,MAAM,kBAAkB,GAAG,OAAO,CAAC,+BAA+B,CAAC,CAAC;QACpE,OAAO,kBAAkB,CAAC;KAC3B;IAAC,MAAM;QACN,OAAO,eAAe,CACpB,IAAI,mCAA2B,CAC7B,4DAA4D;YAC1D,4EAA4E,CAC/E,CACF,CAAC;KACH;AACH,CAAC;AAfD,4DAeC;AAgBU,QAAA,MAAM,GAA8D,eAAe,CAC5F,IAAI,mCAA2B,CAC7B,oFAAoF,CACrF,CACF,CAAC;AAEF,IAAI;IACF,wEAAwE;IACxE,cAAM,GAAG,OAAO,CAAC,QAAQ,CAAC,CAAC;CAC5B;AAAC,MAAM,GAAE,CAAC,sBAAsB;AAEtB,QAAA,QAAQ,GACjB,eAAe,CACb,IAAI,mCAA2B,CAC7B,uCAAuC;IACrC,8EAA8E,CACjF,CACF,CAAC;AAEJ,IAAI;IACF,wEAAwE;IACxE,gBAAQ,GAAG,OAAO,CAAC,UAAU,CAAC,CAAC;CAChC;AAAC,MAAM,GAAE,CAAC,sBAAsB;AA2CtB,QAAA,IAAI,GAAyD,eAAe,CACrF,IAAI,mCAA2B,CAC7B,kFAAkF,CACnF,CACF,CAAC;AAEF,IAAI;IACF,wEAAwE;IACxE,YAAI,GAAG,OAAO,CAAC,MAAM,CAAC,CAAC;CACxB;AAAC,MAAM,GAAE,CAAC,sBAAsB;AAEjC,cAAc;AACD,QAAA,yBAAyB,GAAG,MAAM,CAAC,MAAM,CAAC;IACrD,UAAU,EAAE,CAAC;IACb,KAAK,EAAE,CAAC;IACR,OAAO,EAAE,CAAC;IACV,IAAI,EAAE,CAAC;IACP,KAAK,EAAE,CAAC;CACA,CAAC,CAAC"}

103
node_modules/mongodb/lib/encrypter.js generated vendored Normal file

@@ -0,0 +1,103 @@
"use strict";
/* eslint-disable @typescript-eslint/no-var-requires */
Object.defineProperty(exports, "__esModule", { value: true });
exports.Encrypter = void 0;
const constants_1 = require("./constants");
const error_1 = require("./error");
const mongo_client_1 = require("./mongo_client");
const utils_1 = require("./utils");
let AutoEncrypterClass;
/** @internal */
const kInternalClient = Symbol('internalClient');
/** @internal */
class Encrypter {
constructor(client, uri, options) {
if (typeof options.autoEncryption !== 'object') {
throw new error_1.MongoInvalidArgumentError('Option "autoEncryption" must be specified');
}
// Initialize to null; a call to getInternalClient may set this, so it is important not to overwrite that assignment.
this[kInternalClient] = null;
this.bypassAutoEncryption = !!options.autoEncryption.bypassAutoEncryption;
this.needsConnecting = false;
if (options.maxPoolSize === 0 && options.autoEncryption.keyVaultClient == null) {
options.autoEncryption.keyVaultClient = client;
}
else if (options.autoEncryption.keyVaultClient == null) {
options.autoEncryption.keyVaultClient = this.getInternalClient(client, uri, options);
}
if (this.bypassAutoEncryption) {
options.autoEncryption.metadataClient = undefined;
}
else if (options.maxPoolSize === 0) {
options.autoEncryption.metadataClient = client;
}
else {
options.autoEncryption.metadataClient = this.getInternalClient(client, uri, options);
}
if (options.proxyHost) {
options.autoEncryption.proxyOptions = {
proxyHost: options.proxyHost,
proxyPort: options.proxyPort,
proxyUsername: options.proxyUsername,
proxyPassword: options.proxyPassword
};
}
this.autoEncrypter = new AutoEncrypterClass(client, options.autoEncryption);
}
getInternalClient(client, uri, options) {
// TODO(NODE-4144): Remove new variable for type narrowing
let internalClient = this[kInternalClient];
if (internalClient == null) {
const clonedOptions = {};
for (const key of [
...Object.getOwnPropertyNames(options),
...Object.getOwnPropertySymbols(options)
]) {
if (['autoEncryption', 'minPoolSize', 'servers', 'caseTranslate', 'dbName'].includes(key))
continue;
Reflect.set(clonedOptions, key, Reflect.get(options, key));
}
clonedOptions.minPoolSize = 0;
internalClient = new mongo_client_1.MongoClient(uri, clonedOptions);
this[kInternalClient] = internalClient;
for (const eventName of constants_1.MONGO_CLIENT_EVENTS) {
for (const listener of client.listeners(eventName)) {
internalClient.on(eventName, listener);
}
}
client.on('newListener', (eventName, listener) => {
internalClient?.on(eventName, listener);
});
this.needsConnecting = true;
}
return internalClient;
}
async connectInternalClient() {
// TODO(NODE-4144): Remove new variable for type narrowing
const internalClient = this[kInternalClient];
if (this.needsConnecting && internalClient != null) {
this.needsConnecting = false;
await internalClient.connect();
}
}
close(client, force, callback) {
this.autoEncrypter.teardown(!!force, e => {
const internalClient = this[kInternalClient];
if (internalClient != null && client !== internalClient) {
internalClient.close(force).then(() => callback(), error => callback(error));
return;
}
callback(e);
});
}
static checkForMongoCrypt() {
const mongodbClientEncryption = (0, utils_1.getMongoDBClientEncryption)();
if (mongodbClientEncryption == null) {
throw new error_1.MongoMissingDependencyError('Auto-encryption requested, but the module is not installed. ' +
'Please add `mongodb-client-encryption` as a dependency of your project');
}
AutoEncrypterClass = mongodbClientEncryption.extension(require('../lib/index')).AutoEncrypter;
}
}
exports.Encrypter = Encrypter;
//# sourceMappingURL=encrypter.js.map
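Encrypter is internal; applications reach it through the MongoClient autoEncryption option, which the constructor above validates. A hedged sketch of that entry point, assuming mongodb-client-encryption is installed; the URI, key vault namespace, and zero-filled local master key are illustrative (and insecure) placeholders.

// Sketch: client options that route through the internal Encrypter above.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017', {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault', // hypothetical namespace
    kmsProviders: { local: { key: Buffer.alloc(96) } } // demo-only 96-byte key
  }
});
// Encrypter.checkForMongoCrypt() throws MongoMissingDependencyError at connect
// time if `mongodb-client-encryption` is not installed.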

1
node_modules/mongodb/lib/encrypter.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"encrypter.js","sourceRoot":"","sources":["../src/encrypter.ts"],"names":[],"mappings":";AAAA,uDAAuD;;;AAEvD,2CAAkD;AAElD,mCAAiF;AACjF,iDAAiE;AACjE,mCAA+D;AAE/D,IAAI,kBAA0F,CAAC;AAE/F,gBAAgB;AAChB,MAAM,eAAe,GAAG,MAAM,CAAC,gBAAgB,CAAC,CAAC;AAQjD,gBAAgB;AAChB,MAAa,SAAS;IAMpB,YAAY,MAAmB,EAAE,GAAW,EAAE,OAA2B;QACvE,IAAI,OAAO,OAAO,CAAC,cAAc,KAAK,QAAQ,EAAE;YAC9C,MAAM,IAAI,iCAAyB,CAAC,2CAA2C,CAAC,CAAC;SAClF;QACD,2HAA2H;QAC3H,IAAI,CAAC,eAAe,CAAC,GAAG,IAAI,CAAC;QAE7B,IAAI,CAAC,oBAAoB,GAAG,CAAC,CAAC,OAAO,CAAC,cAAc,CAAC,oBAAoB,CAAC;QAC1E,IAAI,CAAC,eAAe,GAAG,KAAK,CAAC;QAE7B,IAAI,OAAO,CAAC,WAAW,KAAK,CAAC,IAAI,OAAO,CAAC,cAAc,CAAC,cAAc,IAAI,IAAI,EAAE;YAC9E,OAAO,CAAC,cAAc,CAAC,cAAc,GAAG,MAAM,CAAC;SAChD;aAAM,IAAI,OAAO,CAAC,cAAc,CAAC,cAAc,IAAI,IAAI,EAAE;YACxD,OAAO,CAAC,cAAc,CAAC,cAAc,GAAG,IAAI,CAAC,iBAAiB,CAAC,MAAM,EAAE,GAAG,EAAE,OAAO,CAAC,CAAC;SACtF;QAED,IAAI,IAAI,CAAC,oBAAoB,EAAE;YAC7B,OAAO,CAAC,cAAc,CAAC,cAAc,GAAG,SAAS,CAAC;SACnD;aAAM,IAAI,OAAO,CAAC,WAAW,KAAK,CAAC,EAAE;YACpC,OAAO,CAAC,cAAc,CAAC,cAAc,GAAG,MAAM,CAAC;SAChD;aAAM;YACL,OAAO,CAAC,cAAc,CAAC,cAAc,GAAG,IAAI,CAAC,iBAAiB,CAAC,MAAM,EAAE,GAAG,EAAE,OAAO,CAAC,CAAC;SACtF;QAED,IAAI,OAAO,CAAC,SAAS,EAAE;YACrB,OAAO,CAAC,cAAc,CAAC,YAAY,GAAG;gBACpC,SAAS,EAAE,OAAO,CAAC,SAAS;gBAC5B,SAAS,EAAE,OAAO,CAAC,SAAS;gBAC5B,aAAa,EAAE,OAAO,CAAC,aAAa;gBACpC,aAAa,EAAE,OAAO,CAAC,aAAa;aACrC,CAAC;SACH;QAED,IAAI,CAAC,aAAa,GAAG,IAAI,kBAAkB,CAAC,MAAM,EAAE,OAAO,CAAC,cAAc,CAAC,CAAC;IAC9E,CAAC;IAED,iBAAiB,CAAC,MAAmB,EAAE,GAAW,EAAE,OAA2B;QAC7E,0DAA0D;QAC1D,IAAI,cAAc,GAAG,IAAI,CAAC,eAAe,CAAC,CAAC;QAC3C,IAAI,cAAc,IAAI,IAAI,EAAE;YAC1B,MAAM,aAAa,GAAuB,EAAE,CAAC;YAE7C,KAAK,MAAM,GAAG,IAAI;gBAChB,GAAG,MAAM,CAAC,mBAAmB,CAAC,OAAO,CAAC;gBACtC,GAAG,MAAM,CAAC,qBAAqB,CAAC,OAAO,CAAC;aAC7B,EAAE;gBACb,IAAI,CAAC,gBAAgB,EAAE,aAAa,EAAE,SAAS,EAAE,eAAe,EAAE,QAAQ,CAAC,CAAC,QAAQ,CAAC,GAAG,CAAC;oBACvF,SAAS;gBACX,OAAO,CAAC,GAAG,CAAC,aAAa,EAAE,GAAG,EAAE,OAAO,CAAC,GAAG,CAAC,OAAO,EAAE,GAAG,CAAC,CAAC,CAAC;aAC5D;YAED,aAAa,CAAC,WAAW,GAAG,CAAC,CAAC;YAE9B,cAAc,GAAG,IAAI,0BAAW,CAAC,GAAG,EAAE,aAAa,CAAC,CAAC;YACrD,IAAI,CAAC,eAAe,CAAC,GAAG,cAAc,CAAC;YAEvC,KAAK,MAAM,SAAS,IAAI,+BAAmB,EAAE;gBAC3C,KAAK,MAAM,QAAQ,IAAI,MAAM,CAAC,SAAS,CAAC,SAAS,CAAC,EAAE;oBAClD,cAAc,CAAC,EAAE,CAAC,SAAS,EAAE,QAAQ,CAAC,CAAC;iBACxC;aACF;YAED,MAAM,CAAC,EAAE,CAAC,aAAa,EAAE,CAAC,SAAS,EAAE,QAAQ,EAAE,EAAE;gBAC/C,cAAc,EAAE,EAAE,CAAC,SAAS,EAAE,QAAQ,CAAC,CAAC;YAC1C,CAAC,CAAC,CAAC;YAEH,IAAI,CAAC,eAAe,GAAG,IAAI,CAAC;SAC7B;QACD,OAAO,cAAc,CAAC;IACxB,CAAC;IAED,KAAK,CAAC,qBAAqB;QACzB,0DAA0D;QAC1D,MAAM,cAAc,GAAG,IAAI,CAAC,eAAe,CAAC,CAAC;QAC7C,IAAI,IAAI,CAAC,eAAe,IAAI,cAAc,IAAI,IAAI,EAAE;YAClD,IAAI,CAAC,eAAe,GAAG,KAAK,CAAC;YAC7B,MAAM,cAAc,CAAC,OAAO,EAAE,CAAC;SAChC;IACH,CAAC;IAED,KAAK,CAAC,MAAmB,EAAE,KAAc,EAAE,QAAkB;QAC3D,IAAI,CAAC,aAAa,CAAC,QAAQ,CAAC,CAAC,CAAC,KAAK,EAAE,CAAC,CAAC,EAAE;YACvC,MAAM,cAAc,GAAG,IAAI,CAAC,eAAe,CAAC,CAAC;YAC7C,IAAI,cAAc,IAAI,IAAI,IAAI,MAAM,KAAK,cAAc,EAAE;gBACvD,cAAc,CAAC,KAAK,CAAC,KAAK,CAAC,CAAC,IAAI,CAC9B,GAAG,EAAE,CAAC,QAAQ,EAAE,EAChB,KAAK,CAAC,EAAE,CAAC,QAAQ,CAAC,KAAK,CAAC,CACzB,CAAC;gBACF,OAAO;aACR;YACD,QAAQ,CAAC,CAAC,CAAC,CAAC;QACd,CAAC,CAAC,CAAC;IACL,CAAC;IAED,MAAM,CAAC,kBAAkB;QACvB,MAAM,uBAAuB,GAAG,IAAA,kCAA0B,GAAE,CAAC;QAC7D,IAAI,uBAAuB,IAAI,IAAI,EAAE;YACnC,MAAM,IAAI,mCAA2B,CACnC,8DAA8D;gBAC5D,wEAAwE,CAC3E,CAAC;SACH;QACD,kBAAkB,GAAG,uBAAuB,CAAC,SAAS,CAAC,OAAO,CAAC,cAAc,CAAC,CAAC,CAAC,aAAa,CAAC;IAChG,CAAC;CACF;AA9GD,8BA8GC"}

801
node_modules/mongodb/lib/error.js generated vendored Normal file

@@ -0,0 +1,801 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.isResumableError = exports.isNetworkTimeoutError = exports.isSDAMUnrecoverableError = exports.isNodeShuttingDownError = exports.isRetryableReadError = exports.isRetryableWriteError = exports.needsRetryableWriteLabel = exports.MongoWriteConcernError = exports.MongoServerSelectionError = exports.MongoSystemError = exports.MongoMissingDependencyError = exports.MongoMissingCredentialsError = exports.MongoCompatibilityError = exports.MongoInvalidArgumentError = exports.MongoParseError = exports.MongoNetworkTimeoutError = exports.MongoNetworkError = exports.isNetworkErrorBeforeHandshake = exports.MongoTopologyClosedError = exports.MongoCursorExhaustedError = exports.MongoServerClosedError = exports.MongoCursorInUseError = exports.MongoUnexpectedServerResponseError = exports.MongoGridFSChunkError = exports.MongoGridFSStreamError = exports.MongoTailableCursorError = exports.MongoChangeStreamError = exports.MongoAWSError = exports.MongoKerberosError = exports.MongoExpiredSessionError = exports.MongoTransactionError = exports.MongoNotConnectedError = exports.MongoDecompressionError = exports.MongoBatchReExecutionError = exports.MongoRuntimeError = exports.MongoAPIError = exports.MongoDriverError = exports.MongoServerError = exports.MongoError = exports.MongoErrorLabel = exports.GET_MORE_RESUMABLE_CODES = exports.MONGODB_ERROR_CODES = exports.NODE_IS_RECOVERING_ERROR_MESSAGE = exports.LEGACY_NOT_PRIMARY_OR_SECONDARY_ERROR_MESSAGE = exports.LEGACY_NOT_WRITABLE_PRIMARY_ERROR_MESSAGE = void 0;
/** @internal */
const kErrorLabels = Symbol('errorLabels');
/**
* @internal
* The legacy error message from the server that indicates the node is not a writable primary
* https://github.com/mongodb/specifications/blob/b07c26dc40d04ac20349f989db531c9845fdd755/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst#not-writable-primary-and-node-is-recovering
*/
exports.LEGACY_NOT_WRITABLE_PRIMARY_ERROR_MESSAGE = new RegExp('not master', 'i');
/**
* @internal
* The legacy error message from the server that indicates the node is not a primary or secondary
* https://github.com/mongodb/specifications/blob/b07c26dc40d04ac20349f989db531c9845fdd755/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst#not-writable-primary-and-node-is-recovering
*/
exports.LEGACY_NOT_PRIMARY_OR_SECONDARY_ERROR_MESSAGE = new RegExp('not master or secondary', 'i');
/**
* @internal
* The error message from the server that indicates the node is recovering
* https://github.com/mongodb/specifications/blob/b07c26dc40d04ac20349f989db531c9845fdd755/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst#not-writable-primary-and-node-is-recovering
*/
exports.NODE_IS_RECOVERING_ERROR_MESSAGE = new RegExp('node is recovering', 'i');
/** @internal MongoDB Error Codes */
exports.MONGODB_ERROR_CODES = Object.freeze({
HostUnreachable: 6,
HostNotFound: 7,
NetworkTimeout: 89,
ShutdownInProgress: 91,
PrimarySteppedDown: 189,
ExceededTimeLimit: 262,
SocketException: 9001,
NotWritablePrimary: 10107,
InterruptedAtShutdown: 11600,
InterruptedDueToReplStateChange: 11602,
NotPrimaryNoSecondaryOk: 13435,
NotPrimaryOrSecondary: 13436,
StaleShardVersion: 63,
StaleEpoch: 150,
StaleConfig: 13388,
RetryChangeStream: 234,
FailedToSatisfyReadPreference: 133,
CursorNotFound: 43,
LegacyNotPrimary: 10058,
WriteConcernFailed: 64,
NamespaceNotFound: 26,
IllegalOperation: 20,
MaxTimeMSExpired: 50,
UnknownReplWriteConcern: 79,
UnsatisfiableWriteConcern: 100
});
// From spec@https://github.com/mongodb/specifications/blob/f93d78191f3db2898a59013a7ed5650352ef6da8/source/change-streams/change-streams.rst#resumable-error
exports.GET_MORE_RESUMABLE_CODES = new Set([
exports.MONGODB_ERROR_CODES.HostUnreachable,
exports.MONGODB_ERROR_CODES.HostNotFound,
exports.MONGODB_ERROR_CODES.NetworkTimeout,
exports.MONGODB_ERROR_CODES.ShutdownInProgress,
exports.MONGODB_ERROR_CODES.PrimarySteppedDown,
exports.MONGODB_ERROR_CODES.ExceededTimeLimit,
exports.MONGODB_ERROR_CODES.SocketException,
exports.MONGODB_ERROR_CODES.NotWritablePrimary,
exports.MONGODB_ERROR_CODES.InterruptedAtShutdown,
exports.MONGODB_ERROR_CODES.InterruptedDueToReplStateChange,
exports.MONGODB_ERROR_CODES.NotPrimaryNoSecondaryOk,
exports.MONGODB_ERROR_CODES.NotPrimaryOrSecondary,
exports.MONGODB_ERROR_CODES.StaleShardVersion,
exports.MONGODB_ERROR_CODES.StaleEpoch,
exports.MONGODB_ERROR_CODES.StaleConfig,
exports.MONGODB_ERROR_CODES.RetryChangeStream,
exports.MONGODB_ERROR_CODES.FailedToSatisfyReadPreference,
exports.MONGODB_ERROR_CODES.CursorNotFound
]);
/** @public */
exports.MongoErrorLabel = Object.freeze({
RetryableWriteError: 'RetryableWriteError',
TransientTransactionError: 'TransientTransactionError',
UnknownTransactionCommitResult: 'UnknownTransactionCommitResult',
ResumableChangeStreamError: 'ResumableChangeStreamError',
HandshakeError: 'HandshakeError',
ResetPool: 'ResetPool',
InterruptInUseConnections: 'InterruptInUseConnections',
NoWritesPerformed: 'NoWritesPerformed'
});
/**
* @public
* @category Error
*
* @privateRemarks
* mongodb-client-encryption has a dependency on this error, it uses the constructor with a string argument
*/
class MongoError extends Error {
constructor(message) {
if (message instanceof Error) {
super(message.message);
this.cause = message;
}
else {
super(message);
}
this[kErrorLabels] = new Set();
}
get name() {
return 'MongoError';
}
/** Legacy name for server error responses */
get errmsg() {
return this.message;
}
/**
* Checks the error to see if it has an error label
*
* @param label - The error label to check for
* @returns returns true if the error has the provided error label
*/
hasErrorLabel(label) {
return this[kErrorLabels].has(label);
}
addErrorLabel(label) {
this[kErrorLabels].add(label);
}
get errorLabels() {
return Array.from(this[kErrorLabels]);
}
}
exports.MongoError = MongoError;
/**
* An error coming from the mongo server
*
* @public
* @category Error
*/
class MongoServerError extends MongoError {
constructor(message) {
super(message.message || message.errmsg || message.$err || 'n/a');
if (message.errorLabels) {
this[kErrorLabels] = new Set(message.errorLabels);
}
for (const name in message) {
if (name !== 'errorLabels' && name !== 'errmsg' && name !== 'message')
this[name] = message[name];
}
}
get name() {
return 'MongoServerError';
}
}
exports.MongoServerError = MongoServerError;
/**
* An error generated by the driver
*
* @public
* @category Error
*/
class MongoDriverError extends MongoError {
constructor(message) {
super(message);
}
get name() {
return 'MongoDriverError';
}
}
exports.MongoDriverError = MongoDriverError;
/**
* An error generated when the driver API is used incorrectly
*
* @privateRemarks
* Should **never** be directly instantiated
*
* @public
* @category Error
*/
class MongoAPIError extends MongoDriverError {
constructor(message) {
super(message);
}
get name() {
return 'MongoAPIError';
}
}
exports.MongoAPIError = MongoAPIError;
/**
* An error generated when the driver encounters unexpected input
* or reaches an unexpected/invalid internal state
*
* @privateRemarks
* Should **never** be directly instantiated.
*
* @public
* @category Error
*/
class MongoRuntimeError extends MongoDriverError {
constructor(message) {
super(message);
}
get name() {
return 'MongoRuntimeError';
}
}
exports.MongoRuntimeError = MongoRuntimeError;
/**
* An error generated when a batch command is re-executed after one of the commands in the batch
* has failed
*
* @public
* @category Error
*/
class MongoBatchReExecutionError extends MongoAPIError {
constructor(message = 'This batch has already been executed, create new batch to execute') {
super(message);
}
get name() {
return 'MongoBatchReExecutionError';
}
}
exports.MongoBatchReExecutionError = MongoBatchReExecutionError;
/**
* An error generated when the driver fails to decompress
* data received from the server.
*
* @public
* @category Error
*/
class MongoDecompressionError extends MongoRuntimeError {
constructor(message) {
super(message);
}
get name() {
return 'MongoDecompressionError';
}
}
exports.MongoDecompressionError = MongoDecompressionError;
/**
* An error thrown when the user attempts to operate on a database or collection through a MongoClient
* that has not yet successfully called the "connect" method
*
* @public
* @category Error
*/
class MongoNotConnectedError extends MongoAPIError {
constructor(message) {
super(message);
}
get name() {
return 'MongoNotConnectedError';
}
}
exports.MongoNotConnectedError = MongoNotConnectedError;
/**
* An error generated when the user makes a mistake in the usage of transactions.
* (e.g. attempting to commit a transaction with a readPreference other than primary)
*
* @public
* @category Error
*/
class MongoTransactionError extends MongoAPIError {
constructor(message) {
super(message);
}
get name() {
return 'MongoTransactionError';
}
}
exports.MongoTransactionError = MongoTransactionError;
/**
* An error generated when the user attempts to operate
* on a session that has expired or has been closed.
*
* @public
* @category Error
*/
class MongoExpiredSessionError extends MongoAPIError {
constructor(message = 'Cannot use a session that has ended') {
super(message);
}
get name() {
return 'MongoExpiredSessionError';
}
}
exports.MongoExpiredSessionError = MongoExpiredSessionError;
/**
* An error generated when the user attempts to authenticate
* via Kerberos, but fails to connect to the Kerberos client.
*
* @public
* @category Error
*/
class MongoKerberosError extends MongoRuntimeError {
constructor(message) {
super(message);
}
get name() {
return 'MongoKerberosError';
}
}
exports.MongoKerberosError = MongoKerberosError;
/**
* An error generated when the user attempts to authenticate
* via AWS, but fails
*
* @public
* @category Error
*/
class MongoAWSError extends MongoRuntimeError {
constructor(message) {
super(message);
}
get name() {
return 'MongoAWSError';
}
}
exports.MongoAWSError = MongoAWSError;
/**
* An error generated when a ChangeStream operation fails to execute.
*
* @public
* @category Error
*/
class MongoChangeStreamError extends MongoRuntimeError {
constructor(message) {
super(message);
}
get name() {
return 'MongoChangeStreamError';
}
}
exports.MongoChangeStreamError = MongoChangeStreamError;
/**
* An error thrown when the user calls a function or method not supported on a tailable cursor
*
* @public
* @category Error
*/
class MongoTailableCursorError extends MongoAPIError {
constructor(message = 'Tailable cursor does not support this operation') {
super(message);
}
get name() {
return 'MongoTailableCursorError';
}
}
exports.MongoTailableCursorError = MongoTailableCursorError;
/**
* An error generated when a GridFSStream operation fails to execute.
*
* @public
* @category Error
*/
class MongoGridFSStreamError extends MongoRuntimeError {
constructor(message) {
super(message);
}
get name() {
return 'MongoGridFSStreamError';
}
}
exports.MongoGridFSStreamError = MongoGridFSStreamError;
/**
* An error generated when a malformed or invalid chunk is
* encountered when reading from a GridFSStream.
*
* @public
* @category Error
*/
class MongoGridFSChunkError extends MongoRuntimeError {
constructor(message) {
super(message);
}
get name() {
return 'MongoGridFSChunkError';
}
}
exports.MongoGridFSChunkError = MongoGridFSChunkError;
/**
* An error generated when a **parsable** unexpected response comes from the server.
* This is generally an error where the driver is in a state expecting a certain behavior to occur in
* the next message from MongoDB, but it receives something else.
* This error **does not** represent an issue with wire message formatting.
*
* #### Example
* When an operation fails, it is the driver's job to retry it. It must perform serverSelection
* again to make sure that it attempts the operation against a server in a good state. If server
* selection returns a server that does not support retryable operations, this error is used.
* This scenario is unlikely, as retryable support would also have been determined on the first attempt,
* but it is possible the state change could report a selectable server that does not support retries.
*
* @public
* @category Error
*/
class MongoUnexpectedServerResponseError extends MongoRuntimeError {
constructor(message) {
super(message);
}
get name() {
return 'MongoUnexpectedServerResponseError';
}
}
exports.MongoUnexpectedServerResponseError = MongoUnexpectedServerResponseError;
/**
* An error thrown when the user attempts to add options to a cursor that has already been
* initialized
*
* @public
* @category Error
*/
class MongoCursorInUseError extends MongoAPIError {
constructor(message = 'Cursor is already initialized') {
super(message);
}
get name() {
return 'MongoCursorInUseError';
}
}
exports.MongoCursorInUseError = MongoCursorInUseError;
/**
* An error generated when an attempt is made to operate
* on a closed/closing server.
*
* @public
* @category Error
*/
class MongoServerClosedError extends MongoAPIError {
constructor(message = 'Server is closed') {
super(message);
}
get name() {
return 'MongoServerClosedError';
}
}
exports.MongoServerClosedError = MongoServerClosedError;
/**
* An error thrown when an attempt is made to read from a cursor that has been exhausted
*
* @public
* @category Error
*/
class MongoCursorExhaustedError extends MongoAPIError {
constructor(message) {
super(message || 'Cursor is exhausted');
}
get name() {
return 'MongoCursorExhaustedError';
}
}
exports.MongoCursorExhaustedError = MongoCursorExhaustedError;
/**
* An error generated when an attempt is made to operate on a
* dropped, or otherwise unavailable, database.
*
* @public
* @category Error
*/
class MongoTopologyClosedError extends MongoAPIError {
constructor(message = 'Topology is closed') {
super(message);
}
get name() {
return 'MongoTopologyClosedError';
}
}
exports.MongoTopologyClosedError = MongoTopologyClosedError;
/** @internal */
const kBeforeHandshake = Symbol('beforeHandshake');
function isNetworkErrorBeforeHandshake(err) {
return err[kBeforeHandshake] === true;
}
exports.isNetworkErrorBeforeHandshake = isNetworkErrorBeforeHandshake;
/**
* An error indicating an issue with the network, including TCP errors and timeouts.
* @public
* @category Error
*/
class MongoNetworkError extends MongoError {
constructor(message, options) {
super(message);
if (options && typeof options.beforeHandshake === 'boolean') {
this[kBeforeHandshake] = options.beforeHandshake;
}
}
get name() {
return 'MongoNetworkError';
}
}
exports.MongoNetworkError = MongoNetworkError;
/**
* An error indicating a network timeout occurred
* @public
* @category Error
*
* @privateRemarks
* mongodb-client-encryption has a dependency on this error with an instanceof check
*/
class MongoNetworkTimeoutError extends MongoNetworkError {
constructor(message, options) {
super(message, options);
}
get name() {
return 'MongoNetworkTimeoutError';
}
}
exports.MongoNetworkTimeoutError = MongoNetworkTimeoutError;
/**
* An error used when attempting to parse a value (like a connection string)
* @public
* @category Error
*/
class MongoParseError extends MongoDriverError {
constructor(message) {
super(message);
}
get name() {
return 'MongoParseError';
}
}
exports.MongoParseError = MongoParseError;
/**
* An error generated when the user supplies malformed or unexpected arguments
* or when a required argument or field is not provided.
*
* @public
* @category Error
*/
class MongoInvalidArgumentError extends MongoAPIError {
constructor(message) {
super(message);
}
get name() {
return 'MongoInvalidArgumentError';
}
}
exports.MongoInvalidArgumentError = MongoInvalidArgumentError;
/**
* An error generated when a feature that is not enabled or allowed for the current server
* configuration is used
*
* @public
* @category Error
*/
class MongoCompatibilityError extends MongoAPIError {
constructor(message) {
super(message);
}
get name() {
return 'MongoCompatibilityError';
}
}
exports.MongoCompatibilityError = MongoCompatibilityError;
/**
* An error generated when the user fails to provide authentication credentials before attempting
* to connect to a mongo server instance.
*
* @public
* @category Error
*/
class MongoMissingCredentialsError extends MongoAPIError {
constructor(message) {
super(message);
}
get name() {
return 'MongoMissingCredentialsError';
}
}
exports.MongoMissingCredentialsError = MongoMissingCredentialsError;
/**
* An error generated when a required module or dependency is not present in the local environment
*
* @public
* @category Error
*/
class MongoMissingDependencyError extends MongoAPIError {
constructor(message) {
super(message);
}
get name() {
return 'MongoMissingDependencyError';
}
}
exports.MongoMissingDependencyError = MongoMissingDependencyError;
/**
* An error signifying a general system issue
* @public
* @category Error
*/
class MongoSystemError extends MongoError {
constructor(message, reason) {
if (reason && reason.error) {
super(reason.error.message || reason.error);
}
else {
super(message);
}
if (reason) {
this.reason = reason;
}
this.code = reason.error?.code;
}
get name() {
return 'MongoSystemError';
}
}
exports.MongoSystemError = MongoSystemError;
/**
* An error signifying a client-side server selection error
* @public
* @category Error
*/
class MongoServerSelectionError extends MongoSystemError {
constructor(message, reason) {
super(message, reason);
}
get name() {
return 'MongoServerSelectionError';
}
}
exports.MongoServerSelectionError = MongoServerSelectionError;
function makeWriteConcernResultObject(input) {
const output = Object.assign({}, input);
if (output.ok === 0) {
output.ok = 1;
delete output.errmsg;
delete output.code;
delete output.codeName;
}
return output;
}
/**
* An error thrown when the server reports a writeConcernError
* @public
* @category Error
*/
class MongoWriteConcernError extends MongoServerError {
constructor(message, result) {
if (result && Array.isArray(result.errorLabels)) {
message.errorLabels = result.errorLabels;
}
super(message);
this.errInfo = message.errInfo;
if (result != null) {
this.result = makeWriteConcernResultObject(result);
}
}
get name() {
return 'MongoWriteConcernError';
}
}
exports.MongoWriteConcernError = MongoWriteConcernError;
// https://github.com/mongodb/specifications/blob/master/source/retryable-reads/retryable-reads.rst#retryable-error
const RETRYABLE_READ_ERROR_CODES = new Set([
exports.MONGODB_ERROR_CODES.HostUnreachable,
exports.MONGODB_ERROR_CODES.HostNotFound,
exports.MONGODB_ERROR_CODES.NetworkTimeout,
exports.MONGODB_ERROR_CODES.ShutdownInProgress,
exports.MONGODB_ERROR_CODES.PrimarySteppedDown,
exports.MONGODB_ERROR_CODES.SocketException,
exports.MONGODB_ERROR_CODES.NotWritablePrimary,
exports.MONGODB_ERROR_CODES.InterruptedAtShutdown,
exports.MONGODB_ERROR_CODES.InterruptedDueToReplStateChange,
exports.MONGODB_ERROR_CODES.NotPrimaryNoSecondaryOk,
exports.MONGODB_ERROR_CODES.NotPrimaryOrSecondary
]);
// see: https://github.com/mongodb/specifications/blob/master/source/retryable-writes/retryable-writes.rst#terms
const RETRYABLE_WRITE_ERROR_CODES = new Set([
...RETRYABLE_READ_ERROR_CODES,
exports.MONGODB_ERROR_CODES.ExceededTimeLimit
]);
function needsRetryableWriteLabel(error, maxWireVersion) {
// On a pre-4.4 server the driver adds the retryable error label for every valid case;
// executeOperation only inspects the label, and the code/message logic is handled here
if (error instanceof MongoNetworkError) {
return true;
}
if (error instanceof MongoError) {
if ((maxWireVersion >= 9 || error.hasErrorLabel(exports.MongoErrorLabel.RetryableWriteError)) &&
!error.hasErrorLabel(exports.MongoErrorLabel.HandshakeError)) {
// If we already have the error label no need to add it again. 4.4+ servers add the label.
// In the case where we have a handshake error, need to fall down to the logic checking
// the codes.
return false;
}
}
if (error instanceof MongoWriteConcernError) {
return RETRYABLE_WRITE_ERROR_CODES.has(error.result?.code ?? error.code ?? 0);
}
if (error instanceof MongoError && typeof error.code === 'number') {
return RETRYABLE_WRITE_ERROR_CODES.has(error.code);
}
const isNotWritablePrimaryError = exports.LEGACY_NOT_WRITABLE_PRIMARY_ERROR_MESSAGE.test(error.message);
if (isNotWritablePrimaryError) {
return true;
}
const isNodeIsRecoveringError = exports.NODE_IS_RECOVERING_ERROR_MESSAGE.test(error.message);
if (isNodeIsRecoveringError) {
return true;
}
return false;
}
exports.needsRetryableWriteLabel = needsRetryableWriteLabel;
function isRetryableWriteError(error) {
return error.hasErrorLabel(exports.MongoErrorLabel.RetryableWriteError);
}
exports.isRetryableWriteError = isRetryableWriteError;
/** Determines whether an error is something the driver should attempt to retry */
function isRetryableReadError(error) {
const hasRetryableErrorCode = typeof error.code === 'number' ? RETRYABLE_READ_ERROR_CODES.has(error.code) : false;
if (hasRetryableErrorCode) {
return true;
}
if (error instanceof MongoNetworkError) {
return true;
}
const isNotWritablePrimaryError = exports.LEGACY_NOT_WRITABLE_PRIMARY_ERROR_MESSAGE.test(error.message);
if (isNotWritablePrimaryError) {
return true;
}
const isNodeIsRecoveringError = exports.NODE_IS_RECOVERING_ERROR_MESSAGE.test(error.message);
if (isNodeIsRecoveringError) {
return true;
}
return false;
}
exports.isRetryableReadError = isRetryableReadError;
const SDAM_RECOVERING_CODES = new Set([
exports.MONGODB_ERROR_CODES.ShutdownInProgress,
exports.MONGODB_ERROR_CODES.PrimarySteppedDown,
exports.MONGODB_ERROR_CODES.InterruptedAtShutdown,
exports.MONGODB_ERROR_CODES.InterruptedDueToReplStateChange,
exports.MONGODB_ERROR_CODES.NotPrimaryOrSecondary
]);
const SDAM_NOT_PRIMARY_CODES = new Set([
exports.MONGODB_ERROR_CODES.NotWritablePrimary,
exports.MONGODB_ERROR_CODES.NotPrimaryNoSecondaryOk,
exports.MONGODB_ERROR_CODES.LegacyNotPrimary
]);
const SDAM_NODE_SHUTTING_DOWN_ERROR_CODES = new Set([
exports.MONGODB_ERROR_CODES.InterruptedAtShutdown,
exports.MONGODB_ERROR_CODES.ShutdownInProgress
]);
function isRecoveringError(err) {
if (typeof err.code === 'number') {
// If any error code exists, we ignore the error.message
return SDAM_RECOVERING_CODES.has(err.code);
}
return (exports.LEGACY_NOT_PRIMARY_OR_SECONDARY_ERROR_MESSAGE.test(err.message) ||
exports.NODE_IS_RECOVERING_ERROR_MESSAGE.test(err.message));
}
function isNotWritablePrimaryError(err) {
if (typeof err.code === 'number') {
// If any error code exists, we ignore the error.message
return SDAM_NOT_PRIMARY_CODES.has(err.code);
}
if (isRecoveringError(err)) {
return false;
}
return exports.LEGACY_NOT_WRITABLE_PRIMARY_ERROR_MESSAGE.test(err.message);
}
function isNodeShuttingDownError(err) {
return !!(typeof err.code === 'number' && SDAM_NODE_SHUTTING_DOWN_ERROR_CODES.has(err.code));
}
exports.isNodeShuttingDownError = isNodeShuttingDownError;
/**
* Determines whether SDAM can recover from a given error. If it cannot
* then the pool will be cleared, and server state will completely reset
* locally.
*
* @see https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst#not-master-and-node-is-recovering
*/
function isSDAMUnrecoverableError(error) {
// NOTE: the null check is here for a strictly pre-CMAP world; a timeout or
// close event is considered unrecoverable
if (error instanceof MongoParseError || error == null) {
return true;
}
return isRecoveringError(error) || isNotWritablePrimaryError(error);
}
exports.isSDAMUnrecoverableError = isSDAMUnrecoverableError;
function isNetworkTimeoutError(err) {
return !!(err instanceof MongoNetworkError && err.message.match(/timed out/));
}
exports.isNetworkTimeoutError = isNetworkTimeoutError;
function isResumableError(error, wireVersion) {
if (error == null || !(error instanceof MongoError)) {
return false;
}
if (error instanceof MongoNetworkError) {
return true;
}
if (wireVersion != null && wireVersion >= 9) {
// DRIVERS-1308: For 4.4 drivers running against 4.4 servers, drivers will add a special case to treat the CursorNotFound error code as resumable
if (error.code === exports.MONGODB_ERROR_CODES.CursorNotFound) {
return true;
}
return error.hasErrorLabel(exports.MongoErrorLabel.ResumableChangeStreamError);
}
if (typeof error.code === 'number') {
return exports.GET_MORE_RESUMABLE_CODES.has(error.code);
}
return false;
}
exports.isResumableError = isResumableError;
//# sourceMappingURL=error.js.map
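A short sketch of how the error hierarchy and label helpers above are typically consumed by application code; the collection handle and the write itself are hypothetical.

// Sketch: classifying a failed write using the classes and labels above.
const { MongoError, MongoServerError, MongoNetworkError, MongoErrorLabel } = require('mongodb');

async function insertWithDiagnostics(collection) {
  try {
    await collection.insertOne({ _id: 1 }); // hypothetical write
  } catch (error) {
    if (error instanceof MongoNetworkError) {
      // TCP-level failure; retry policy is up to the application
    } else if (error instanceof MongoServerError) {
      // Server-reported failure; labels drive the retry helpers above
      if (error.hasErrorLabel(MongoErrorLabel.RetryableWriteError)) {
        // isRetryableWriteError(error) would also return true here
      }
    } else if (error instanceof MongoError) {
      // Driver-generated failure (the MongoDriverError subtree)
    }
    throw error;
  }
}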

1
node_modules/mongodb/lib/error.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

35
node_modules/mongodb/lib/explain.js generated vendored Normal file

@@ -0,0 +1,35 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Explain = exports.ExplainVerbosity = void 0;
const error_1 = require("./error");
/** @public */
exports.ExplainVerbosity = Object.freeze({
queryPlanner: 'queryPlanner',
queryPlannerExtended: 'queryPlannerExtended',
executionStats: 'executionStats',
allPlansExecution: 'allPlansExecution'
});
/** @internal */
class Explain {
constructor(verbosity) {
if (typeof verbosity === 'boolean') {
this.verbosity = verbosity
? exports.ExplainVerbosity.allPlansExecution
: exports.ExplainVerbosity.queryPlanner;
}
else {
this.verbosity = verbosity;
}
}
static fromOptions(options) {
if (options?.explain == null)
return;
const explain = options.explain;
if (typeof explain === 'boolean' || typeof explain === 'string') {
return new Explain(explain);
}
throw new error_1.MongoInvalidArgumentError('Field "explain" must be a string or a boolean');
}
}
exports.Explain = Explain;
//# sourceMappingURL=explain.js.map
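A sketch of the boolean-to-verbosity mapping Explain implements above. The class is marked @internal, so this illustration requires the compiled file directly; applications normally reach this path through a cursor's explain() method.

// Sketch: how Explain.fromOptions normalizes the `explain` option.
const { Explain, ExplainVerbosity } = require('mongodb/lib/explain');

console.log(Explain.fromOptions({ explain: true }).verbosity === ExplainVerbosity.allPlansExecution); // true
console.log(Explain.fromOptions({ explain: false }).verbosity === ExplainVerbosity.queryPlanner); // true
console.log(Explain.fromOptions({ explain: 'executionStats' }).verbosity); // passed through verbatim
console.log(Explain.fromOptions({})); // undefined (no explain requested)
// Typical application usage: collection.find(query).explain('executionStats')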

1
node_modules/mongodb/lib/explain.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"explain.js","sourceRoot":"","sources":["../src/explain.ts"],"names":[],"mappings":";;;AAAA,mCAAoD;AAEpD,cAAc;AACD,QAAA,gBAAgB,GAAG,MAAM,CAAC,MAAM,CAAC;IAC5C,YAAY,EAAE,cAAc;IAC5B,oBAAoB,EAAE,sBAAsB;IAC5C,cAAc,EAAE,gBAAgB;IAChC,iBAAiB,EAAE,mBAAmB;CAC9B,CAAC,CAAC;AAmBZ,gBAAgB;AAChB,MAAa,OAAO;IAGlB,YAAY,SAA+B;QACzC,IAAI,OAAO,SAAS,KAAK,SAAS,EAAE;YAClC,IAAI,CAAC,SAAS,GAAG,SAAS;gBACxB,CAAC,CAAC,wBAAgB,CAAC,iBAAiB;gBACpC,CAAC,CAAC,wBAAgB,CAAC,YAAY,CAAC;SACnC;aAAM;YACL,IAAI,CAAC,SAAS,GAAG,SAAS,CAAC;SAC5B;IACH,CAAC;IAED,MAAM,CAAC,WAAW,CAAC,OAAwB;QACzC,IAAI,OAAO,EAAE,OAAO,IAAI,IAAI;YAAE,OAAO;QAErC,MAAM,OAAO,GAAG,OAAO,CAAC,OAAO,CAAC;QAChC,IAAI,OAAO,OAAO,KAAK,SAAS,IAAI,OAAO,OAAO,KAAK,QAAQ,EAAE;YAC/D,OAAO,IAAI,OAAO,CAAC,OAAO,CAAC,CAAC;SAC7B;QAED,MAAM,IAAI,iCAAyB,CAAC,+CAA+C,CAAC,CAAC;IACvF,CAAC;CACF;AAvBD,0BAuBC"}

313
node_modules/mongodb/lib/gridfs/download.js generated vendored Normal file

@@ -0,0 +1,313 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.GridFSBucketReadStream = void 0;
const stream_1 = require("stream");
const error_1 = require("../error");
/**
* A readable stream that enables you to read buffers from GridFS.
*
* Do not instantiate this class directly. Use `openDownloadStream()` instead.
* @public
*/
class GridFSBucketReadStream extends stream_1.Readable {
/**
* @param chunks - Handle for chunks collection
* @param files - Handle for files collection
* @param readPreference - The read preference to use
* @param filter - The filter to use to find the file document
* @internal
*/
constructor(chunks, files, readPreference, filter, options) {
super();
this.s = {
bytesToTrim: 0,
bytesToSkip: 0,
bytesRead: 0,
chunks,
expected: 0,
files,
filter,
init: false,
expectedEnd: 0,
options: {
start: 0,
end: 0,
...options
},
readPreference
};
}
/**
* Reads from the cursor and pushes to the stream.
* Private implementation; do not call directly.
* @internal
*/
_read() {
if (this.destroyed)
return;
waitForFile(this, () => doRead(this));
}
/**
* Sets the 0-based offset in bytes to start streaming from. Throws
* an error if this stream has entered flowing mode
* (e.g. if you've already called `on('data')`)
*
* @param start - 0-based offset in bytes to start streaming from
*/
start(start = 0) {
throwIfInitialized(this);
this.s.options.start = start;
return this;
}
/**
* Sets the 0-based offset in bytes at which to stop streaming. Throws
* an error if this stream has entered flowing mode
* (e.g. if you've already called `on('data')`)
*
* @param end - Offset in bytes to stop reading at
*/
end(end = 0) {
throwIfInitialized(this);
this.s.options.end = end;
return this;
}
/**
* Marks this stream as aborted (will never push another `data` event)
* and kills the underlying cursor. Will emit the 'end' event, and then
* the 'close' event once the cursor is successfully killed.
*/
async abort() {
this.push(null);
this.destroyed = true;
if (this.s.cursor) {
try {
await this.s.cursor.close();
}
finally {
this.emit(GridFSBucketReadStream.CLOSE);
}
}
else {
if (!this.s.init) {
// If not initialized, fire close event because we will never
// get a cursor
this.emit(GridFSBucketReadStream.CLOSE);
}
}
}
}
exports.GridFSBucketReadStream = GridFSBucketReadStream;
/**
* An error occurred
* @event
*/
GridFSBucketReadStream.ERROR = 'error';
/**
* Fires when the stream loaded the file document corresponding to the provided id.
* @event
*/
GridFSBucketReadStream.FILE = 'file';
/**
* Emitted when a chunk of data is available to be consumed.
* @event
*/
GridFSBucketReadStream.DATA = 'data';
/**
* Fired when the stream is exhausted (no more data events).
* @event
*/
GridFSBucketReadStream.END = 'end';
/**
* Fired when the stream is exhausted and the underlying cursor is killed
* @event
*/
GridFSBucketReadStream.CLOSE = 'close';
function throwIfInitialized(stream) {
if (stream.s.init) {
throw new error_1.MongoGridFSStreamError('Options cannot be changed after the stream is initialized');
}
}
function doRead(stream) {
if (stream.destroyed)
return;
if (!stream.s.cursor)
return;
if (!stream.s.file)
return;
const handleReadResult = ({ error, doc }) => {
if (stream.destroyed) {
return;
}
if (error) {
stream.emit(GridFSBucketReadStream.ERROR, error);
return;
}
if (!doc) {
stream.push(null);
stream.s.cursor?.close().then(() => {
stream.emit(GridFSBucketReadStream.CLOSE);
}, error => {
stream.emit(GridFSBucketReadStream.ERROR, error);
});
return;
}
if (!stream.s.file)
return;
const bytesRemaining = stream.s.file.length - stream.s.bytesRead;
const expectedN = stream.s.expected++;
const expectedLength = Math.min(stream.s.file.chunkSize, bytesRemaining);
if (doc.n > expectedN) {
return stream.emit(GridFSBucketReadStream.ERROR, new error_1.MongoGridFSChunkError(`ChunkIsMissing: Got unexpected n: ${doc.n}, expected: ${expectedN}`));
}
if (doc.n < expectedN) {
return stream.emit(GridFSBucketReadStream.ERROR, new error_1.MongoGridFSChunkError(`ExtraChunk: Got unexpected n: ${doc.n}, expected: ${expectedN}`));
}
let buf = Buffer.isBuffer(doc.data) ? doc.data : doc.data.buffer;
if (buf.byteLength !== expectedLength) {
if (bytesRemaining <= 0) {
return stream.emit(GridFSBucketReadStream.ERROR, new error_1.MongoGridFSChunkError(`ExtraChunk: Got unexpected n: ${doc.n}, expected file length ${stream.s.file.length} bytes but already read ${stream.s.bytesRead} bytes`));
}
return stream.emit(GridFSBucketReadStream.ERROR, new error_1.MongoGridFSChunkError(`ChunkIsWrongSize: Got unexpected length: ${buf.byteLength}, expected: ${expectedLength}`));
}
stream.s.bytesRead += buf.byteLength;
if (buf.byteLength === 0) {
return stream.push(null);
}
let sliceStart = null;
let sliceEnd = null;
if (stream.s.bytesToSkip != null) {
sliceStart = stream.s.bytesToSkip;
stream.s.bytesToSkip = 0;
}
const atEndOfStream = expectedN === stream.s.expectedEnd - 1;
const bytesLeftToRead = stream.s.options.end - stream.s.bytesToSkip;
if (atEndOfStream && stream.s.bytesToTrim != null) {
sliceEnd = stream.s.file.chunkSize - stream.s.bytesToTrim;
}
else if (stream.s.options.end && bytesLeftToRead < doc.data.byteLength) {
sliceEnd = bytesLeftToRead;
}
if (sliceStart != null || sliceEnd != null) {
buf = buf.slice(sliceStart || 0, sliceEnd || buf.byteLength);
}
stream.push(buf);
return;
};
stream.s.cursor.next().then(doc => handleReadResult({ error: null, doc }), error => handleReadResult({ error, doc: null }));
}
function init(stream) {
const findOneOptions = {};
if (stream.s.readPreference) {
findOneOptions.readPreference = stream.s.readPreference;
}
if (stream.s.options && stream.s.options.sort) {
findOneOptions.sort = stream.s.options.sort;
}
if (stream.s.options && stream.s.options.skip) {
findOneOptions.skip = stream.s.options.skip;
}
const handleReadResult = ({ error, doc }) => {
if (error) {
return stream.emit(GridFSBucketReadStream.ERROR, error);
}
if (!doc) {
const identifier = stream.s.filter._id
? stream.s.filter._id.toString()
: stream.s.filter.filename;
const errmsg = `FileNotFound: file ${identifier} was not found`;
// TODO(NODE-3483)
const err = new error_1.MongoRuntimeError(errmsg);
err.code = 'ENOENT'; // TODO: NODE-3338 set property as part of constructor
return stream.emit(GridFSBucketReadStream.ERROR, err);
}
// If document is empty, kill the stream immediately and don't
// execute any reads
if (doc.length <= 0) {
stream.push(null);
return;
}
if (stream.destroyed) {
// If user destroys the stream before we have a cursor, wait
// until the query is done to say we're 'closed' because we can't
// cancel a query.
stream.emit(GridFSBucketReadStream.CLOSE);
return;
}
try {
stream.s.bytesToSkip = handleStartOption(stream, doc, stream.s.options);
}
catch (error) {
return stream.emit(GridFSBucketReadStream.ERROR, error);
}
const filter = { files_id: doc._id };
// Currently (MongoDB 3.4.4) skip function does not support the index,
// it needs to retrieve all the documents first and then skip them. (CS-25811)
// As a workaround we use $gte on the "n" field.
if (stream.s.options && stream.s.options.start != null) {
const skip = Math.floor(stream.s.options.start / doc.chunkSize);
if (skip > 0) {
filter['n'] = { $gte: skip };
}
}
stream.s.cursor = stream.s.chunks.find(filter).sort({ n: 1 });
if (stream.s.readPreference) {
stream.s.cursor.withReadPreference(stream.s.readPreference);
}
stream.s.expectedEnd = Math.ceil(doc.length / doc.chunkSize);
stream.s.file = doc;
try {
stream.s.bytesToTrim = handleEndOption(stream, doc, stream.s.cursor, stream.s.options);
}
catch (error) {
return stream.emit(GridFSBucketReadStream.ERROR, error);
}
stream.emit(GridFSBucketReadStream.FILE, doc);
return;
};
stream.s.files.findOne(stream.s.filter, findOneOptions).then(doc => handleReadResult({ error: null, doc }), error => handleReadResult({ error, doc: null }));
}
function waitForFile(stream, callback) {
if (stream.s.file) {
return callback();
}
if (!stream.s.init) {
init(stream);
stream.s.init = true;
}
stream.once('file', () => {
callback();
});
}
function handleStartOption(stream, doc, options) {
if (options && options.start != null) {
if (options.start > doc.length) {
throw new error_1.MongoInvalidArgumentError(`Stream start (${options.start}) must not be more than the length of the file (${doc.length})`);
}
if (options.start < 0) {
throw new error_1.MongoInvalidArgumentError(`Stream start (${options.start}) must not be negative`);
}
if (options.end != null && options.end < options.start) {
throw new error_1.MongoInvalidArgumentError(`Stream start (${options.start}) must not be greater than stream end (${options.end})`);
}
stream.s.bytesRead = Math.floor(options.start / doc.chunkSize) * doc.chunkSize;
stream.s.expected = Math.floor(options.start / doc.chunkSize);
return options.start - stream.s.bytesRead;
}
throw new error_1.MongoInvalidArgumentError('Start option must be defined');
}
function handleEndOption(stream, doc, cursor, options) {
if (options && options.end != null) {
if (options.end > doc.length) {
throw new error_1.MongoInvalidArgumentError(`Stream end (${options.end}) must not be more than the length of the file (${doc.length})`);
}
if (options.start == null || options.start < 0) {
throw new error_1.MongoInvalidArgumentError(`Stream end (${options.end}) must not be negative`);
}
const start = options.start != null ? Math.floor(options.start / doc.chunkSize) : 0;
cursor.limit(Math.ceil(options.end / doc.chunkSize) - start);
stream.s.expectedEnd = Math.ceil(options.end / doc.chunkSize);
return Math.ceil(options.end / doc.chunkSize) * doc.chunkSize - options.end;
}
throw new error_1.MongoInvalidArgumentError('End option must be defined');
}
//# sourceMappingURL=download.js.map
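A sketch of reading a byte range through the start()/end() setters above; the bucket, file id, and output path are hypothetical. Both setters must be called before the stream enters flowing mode.

// Sketch: piping a slice of a GridFS file to disk.
const fs = require('fs');

function downloadSlice(bucket, fileId) {
  bucket
    .openDownloadStream(fileId) // yields a GridFSBucketReadStream
    .start(1024) // skip the first 1 KiB
    .end(4096)   // stop at byte offset 4096
    .pipe(fs.createWriteStream('/tmp/slice.bin'))
    .on('finish', () => console.log('slice written'));
}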

1
node_modules/mongodb/lib/gridfs/download.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

128
node_modules/mongodb/lib/gridfs/index.js generated vendored Normal file

@@ -0,0 +1,128 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.GridFSBucket = void 0;
const error_1 = require("../error");
const mongo_types_1 = require("../mongo_types");
const write_concern_1 = require("../write_concern");
const download_1 = require("./download");
const upload_1 = require("./upload");
const DEFAULT_GRIDFS_BUCKET_OPTIONS = {
bucketName: 'fs',
chunkSizeBytes: 255 * 1024
};
/**
* Constructor for a streaming GridFS interface
* @public
*/
class GridFSBucket extends mongo_types_1.TypedEventEmitter {
constructor(db, options) {
super();
this.setMaxListeners(0);
const privateOptions = {
...DEFAULT_GRIDFS_BUCKET_OPTIONS,
...options,
writeConcern: write_concern_1.WriteConcern.fromOptions(options)
};
this.s = {
db,
options: privateOptions,
_chunksCollection: db.collection(privateOptions.bucketName + '.chunks'),
_filesCollection: db.collection(privateOptions.bucketName + '.files'),
checkedIndexes: false,
calledOpenUploadStream: false
};
}
/**
* Returns a writable stream (GridFSBucketWriteStream) for writing
* buffers to GridFS. The stream's 'id' property contains the resulting
* file's id.
*
* @param filename - The value of the 'filename' key in the files doc
* @param options - Optional settings.
*/
openUploadStream(filename, options) {
return new upload_1.GridFSBucketWriteStream(this, filename, options);
}
/**
* Returns a writable stream (GridFSBucketWriteStream) for writing
* buffers to GridFS for a custom file id. The stream's 'id' property contains the resulting
* file's id.
*/
openUploadStreamWithId(id, filename, options) {
return new upload_1.GridFSBucketWriteStream(this, filename, { ...options, id });
}
/** Returns a readable stream (GridFSBucketReadStream) for streaming file data from GridFS. */
openDownloadStream(id, options) {
return new download_1.GridFSBucketReadStream(this.s._chunksCollection, this.s._filesCollection, this.s.options.readPreference, { _id: id }, options);
}
/**
* Deletes a file with the given id
*
* @param id - The id of the file doc
*/
async delete(id) {
const { deletedCount } = await this.s._filesCollection.deleteOne({ _id: id });
// Delete orphaned chunks before returning FileNotFound
await this.s._chunksCollection.deleteMany({ files_id: id });
if (deletedCount === 0) {
// TODO(NODE-3483): Replace with more appropriate error
// Consider creating new error MongoGridFSFileNotFoundError
throw new error_1.MongoRuntimeError(`File not found for id ${id}`);
}
}
/** Convenience wrapper around find on the files collection */
find(filter = {}, options = {}) {
return this.s._filesCollection.find(filter, options);
}
/**
* Returns a readable stream (GridFSBucketReadStream) for streaming the
* file with the given name from GridFS. If there are multiple files with
* the same name, this will stream the most recent file with the given name
* (as determined by the `uploadDate` field). You can set the `revision`
* option to change this behavior.
*/
openDownloadStreamByName(filename, options) {
let sort = { uploadDate: -1 };
let skip = undefined;
if (options && options.revision != null) {
if (options.revision >= 0) {
sort = { uploadDate: 1 };
skip = options.revision;
}
else {
skip = -options.revision - 1;
}
}
return new download_1.GridFSBucketReadStream(this.s._chunksCollection, this.s._filesCollection, this.s.options.readPreference, { filename }, { ...options, sort, skip });
}
/**
* Renames the file with the given _id to the given string
*
* @param id - the id of the file to rename
* @param filename - new name for the file
*/
async rename(id, filename) {
const filter = { _id: id };
const update = { $set: { filename } };
const { matchedCount } = await this.s._filesCollection.updateOne(filter, update);
if (matchedCount === 0) {
throw new error_1.MongoRuntimeError(`File with id ${id} not found`);
}
}
/** Removes this bucket's files collection, followed by its chunks collection. */
async drop() {
await this.s._filesCollection.drop();
await this.s._chunksCollection.drop();
}
}
exports.GridFSBucket = GridFSBucket;
/**
* When the first call to openUploadStream is made, the upload stream will
* check to see if it needs to create the proper indexes on the chunks and
* files collections. This event is fired either when 1) it determines that
* no index creation is necessary, or 2) it successfully creates the
* necessary indexes.
* @event
*/
GridFSBucket.INDEX = 'index';
//# sourceMappingURL=index.js.map
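A sketch that round-trips a file through the GridFSBucket API above; the db handle, bucket name, and file paths are hypothetical.

// Sketch: upload a file, then stream back its most recent revision by name.
const fs = require('fs');
const { GridFSBucket } = require('mongodb');

function uploadThenDownload(db) {
  const bucket = new GridFSBucket(db, { bucketName: 'media' });
  fs.createReadStream('./song.mp3')
    .pipe(bucket.openUploadStream('song.mp3'))
    .on('finish', () => {
      // openDownloadStreamByName defaults to the newest file with this name
      bucket
        .openDownloadStreamByName('song.mp3')
        .pipe(fs.createWriteStream('./copy.mp3'));
    });
}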

1
node_modules/mongodb/lib/gridfs/index.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../src/gridfs/index.ts"],"names":[],"mappings":";;;AAIA,oCAA6C;AAC7C,gDAA2D;AAG3D,oDAAqE;AAErE,yCAKoB;AACpB,qCAAgG;AAEhG,MAAM,6BAA6B,GAG/B;IACF,UAAU,EAAE,IAAI;IAChB,cAAc,EAAE,GAAG,GAAG,IAAI;CAC3B,CAAC;AAgCF;;;GAGG;AACH,MAAa,YAAa,SAAQ,+BAAqC;IAcrE,YAAY,EAAM,EAAE,OAA6B;QAC/C,KAAK,EAAE,CAAC;QACR,IAAI,CAAC,eAAe,CAAC,CAAC,CAAC,CAAC;QACxB,MAAM,cAAc,GAAG;YACrB,GAAG,6BAA6B;YAChC,GAAG,OAAO;YACV,YAAY,EAAE,4BAAY,CAAC,WAAW,CAAC,OAAO,CAAC;SAChD,CAAC;QACF,IAAI,CAAC,CAAC,GAAG;YACP,EAAE;YACF,OAAO,EAAE,cAAc;YACvB,iBAAiB,EAAE,EAAE,CAAC,UAAU,CAAc,cAAc,CAAC,UAAU,GAAG,SAAS,CAAC;YACpF,gBAAgB,EAAE,EAAE,CAAC,UAAU,CAAa,cAAc,CAAC,UAAU,GAAG,QAAQ,CAAC;YACjF,cAAc,EAAE,KAAK;YACrB,sBAAsB,EAAE,KAAK;SAC9B,CAAC;IACJ,CAAC;IAED;;;;;;;OAOG;IAEH,gBAAgB,CACd,QAAgB,EAChB,OAAwC;QAExC,OAAO,IAAI,gCAAuB,CAAC,IAAI,EAAE,QAAQ,EAAE,OAAO,CAAC,CAAC;IAC9D,CAAC;IAED;;;;OAIG;IACH,sBAAsB,CACpB,EAAY,EACZ,QAAgB,EAChB,OAAwC;QAExC,OAAO,IAAI,gCAAuB,CAAC,IAAI,EAAE,QAAQ,EAAE,EAAE,GAAG,OAAO,EAAE,EAAE,EAAE,CAAC,CAAC;IACzE,CAAC;IAED,8FAA8F;IAC9F,kBAAkB,CAChB,EAAY,EACZ,OAAuC;QAEvC,OAAO,IAAI,iCAAsB,CAC/B,IAAI,CAAC,CAAC,CAAC,iBAAiB,EACxB,IAAI,CAAC,CAAC,CAAC,gBAAgB,EACvB,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,cAAc,EAC7B,EAAE,GAAG,EAAE,EAAE,EAAE,EACX,OAAO,CACR,CAAC;IACJ,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,MAAM,CAAC,EAAY;QACvB,MAAM,EAAE,YAAY,EAAE,GAAG,MAAM,IAAI,CAAC,CAAC,CAAC,gBAAgB,CAAC,SAAS,CAAC,EAAE,GAAG,EAAE,EAAE,EAAE,CAAC,CAAC;QAE9E,uDAAuD;QACvD,MAAM,IAAI,CAAC,CAAC,CAAC,iBAAiB,CAAC,UAAU,CAAC,EAAE,QAAQ,EAAE,EAAE,EAAE,CAAC,CAAC;QAE5D,IAAI,YAAY,KAAK,CAAC,EAAE;YACtB,uDAAuD;YACvD,2DAA2D;YAC3D,MAAM,IAAI,yBAAiB,CAAC,yBAAyB,EAAE,EAAE,CAAC,CAAC;SAC5D;IACH,CAAC;IAED,8DAA8D;IAC9D,IAAI,CAAC,SAA6B,EAAE,EAAE,UAAuB,EAAE;QAC7D,OAAO,IAAI,CAAC,CAAC,CAAC,gBAAgB,CAAC,IAAI,CAAC,MAAM,EAAE,OAAO,CAAC,CAAC;IACvD,CAAC;IAED;;;;;;OAMG;IACH,wBAAwB,CACtB,QAAgB,EAChB,OAAmD;QAEnD,IAAI,IAAI,GAAS,EAAE,UAAU,EAAE,CAAC,CAAC,EAAE,CAAC;QACpC,IAAI,IAAI,GAAG,SAAS,CAAC;QACrB,IAAI,OAAO,IAAI,OAAO,CAAC,QAAQ,IAAI,IAAI,EAAE;YACvC,IAAI,OAAO,CAAC,QAAQ,IAAI,CAAC,EAAE;gBACzB,IAAI,GAAG,EAAE,UAAU,EAAE,CAAC,EAAE,CAAC;gBACzB,IAAI,GAAG,OAAO,CAAC,QAAQ,CAAC;aACzB;iBAAM;gBACL,IAAI,GAAG,CAAC,OAAO,CAAC,QAAQ,GAAG,CAAC,CAAC;aAC9B;SACF;QACD,OAAO,IAAI,iCAAsB,CAC/B,IAAI,CAAC,CAAC,CAAC,iBAAiB,EACxB,IAAI,CAAC,CAAC,CAAC,gBAAgB,EACvB,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,cAAc,EAC7B,EAAE,QAAQ,EAAE,EACZ,EAAE,GAAG,OAAO,EAAE,IAAI,EAAE,IAAI,EAAE,CAC3B,CAAC;IACJ,CAAC;IAED;;;;;OAKG;IACH,KAAK,CAAC,MAAM,CAAC,EAAY,EAAE,QAAgB;QACzC,MAAM,MAAM,GAAG,EAAE,GAAG,EAAE,EAAE,EAAE,CAAC;QAC3B,MAAM,MAAM,GAAG,EAAE,IAAI,EAAE,EAAE,QAAQ,EAAE,EAAE,CAAC;QACtC,MAAM,EAAE,YAAY,EAAE,GAAG,MAAM,IAAI,CAAC,CAAC,CAAC,gBAAgB,CAAC,SAAS,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;QACjF,IAAI,YAAY,KAAK,CAAC,EAAE;YACtB,MAAM,IAAI,yBAAiB,CAAC,gBAAgB,EAAE,YAAY,CAAC,CAAC;SAC7D;IACH,CAAC;IAED,iFAAiF;IACjF,KAAK,CAAC,IAAI;QACR,MAAM,IAAI,CAAC,CAAC,CAAC,gBAAgB,CAAC,IAAI,EAAE,CAAC;QACrC,MAAM,IAAI,CAAC,CAAC,CAAC,iBAAiB,CAAC,IAAI,EAAE,CAAC;IACxC,CAAC;;AAnJH,oCAoJC;AAhJC;;;;;;;GAOG;AACa,kBAAK,GAAG,OAAgB,CAAC"}

342
node_modules/mongodb/lib/gridfs/upload.js generated vendored Normal file

@@ -0,0 +1,342 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.GridFSBucketWriteStream = void 0;
const stream_1 = require("stream");
const bson_1 = require("../bson");
const error_1 = require("../error");
const write_concern_1 = require("./../write_concern");
/**
* A writable stream that enables you to write buffers to GridFS.
*
* Do not instantiate this class directly. Use `openUploadStream()` instead.
* @public
*/
class GridFSBucketWriteStream extends stream_1.Writable {
/**
* @param bucket - Handle for this stream's corresponding bucket
* @param filename - The value of the 'filename' key in the files doc
* @param options - Optional settings.
* @internal
*/
constructor(bucket, filename, options) {
super();
options = options ?? {};
this.bucket = bucket;
this.chunks = bucket.s._chunksCollection;
this.filename = filename;
this.files = bucket.s._filesCollection;
this.options = options;
this.writeConcern = write_concern_1.WriteConcern.fromOptions(options) || bucket.s.options.writeConcern;
// Signals the write is all done
this.done = false;
this.id = options.id ? options.id : new bson_1.ObjectId();
// Properly inherit the default chunk size from the parent bucket
this.chunkSizeBytes = options.chunkSizeBytes || this.bucket.s.options.chunkSizeBytes;
this.bufToStore = Buffer.alloc(this.chunkSizeBytes);
this.length = 0;
this.n = 0;
this.pos = 0;
this.state = {
streamEnd: false,
outstandingRequests: 0,
errored: false,
aborted: false
};
if (!this.bucket.s.calledOpenUploadStream) {
this.bucket.s.calledOpenUploadStream = true;
checkIndexes(this).then(() => {
this.bucket.s.checkedIndexes = true;
this.bucket.emit('index');
}, () => null);
}
}
write(chunk, encodingOrCallback, callback) {
const encoding = typeof encodingOrCallback === 'function' ? undefined : encodingOrCallback;
callback = typeof encodingOrCallback === 'function' ? encodingOrCallback : callback;
return waitForIndexes(this, () => doWrite(this, chunk, encoding, callback));
}
/**
* Places this write stream into an aborted state (all future writes fail)
* and deletes all chunks that have already been written.
*/
async abort() {
if (this.state.streamEnd) {
// TODO(NODE-3485): Replace with MongoGridFSStreamClosed
throw new error_1.MongoAPIError('Cannot abort a stream that has already completed');
}
if (this.state.aborted) {
// TODO(NODE-3485): Replace with MongoGridFSStreamClosed
throw new error_1.MongoAPIError('Cannot call abort() on a stream twice');
}
this.state.aborted = true;
await this.chunks.deleteMany({ files_id: this.id });
}
end(chunkOrCallback, encodingOrCallback, callback) {
const chunk = typeof chunkOrCallback === 'function' ? undefined : chunkOrCallback;
const encoding = typeof encodingOrCallback === 'function' ? undefined : encodingOrCallback;
callback =
typeof chunkOrCallback === 'function'
? chunkOrCallback
: typeof encodingOrCallback === 'function'
? encodingOrCallback
: callback;
if (this.state.streamEnd || checkAborted(this, callback))
return this;
this.state.streamEnd = true;
if (callback) {
this.once(GridFSBucketWriteStream.FINISH, (result) => {
if (callback)
callback(undefined, result);
});
}
if (!chunk) {
waitForIndexes(this, () => !!writeRemnant(this));
return this;
}
this.write(chunk, encoding, () => {
writeRemnant(this);
});
return this;
}
}
exports.GridFSBucketWriteStream = GridFSBucketWriteStream;
/** @event */
GridFSBucketWriteStream.CLOSE = 'close';
/** @event */
GridFSBucketWriteStream.ERROR = 'error';
/**
* `end()` was called and the write stream successfully wrote the file metadata and all the chunks to MongoDB.
* @event
*/
GridFSBucketWriteStream.FINISH = 'finish';
function __handleError(stream, error, callback) {
if (stream.state.errored) {
return;
}
stream.state.errored = true;
if (callback) {
return callback(error);
}
stream.emit(GridFSBucketWriteStream.ERROR, error);
}
function createChunkDoc(filesId, n, data) {
return {
_id: new bson_1.ObjectId(),
files_id: filesId,
n,
data
};
}
async function checkChunksIndex(stream) {
const index = { files_id: 1, n: 1 };
let indexes;
try {
indexes = await stream.chunks.listIndexes().toArray();
}
catch (error) {
if (error instanceof error_1.MongoError && error.code === error_1.MONGODB_ERROR_CODES.NamespaceNotFound) {
indexes = [];
}
else {
throw error;
}
}
const hasChunksIndex = !!indexes.find(index => {
const keys = Object.keys(index.key);
if (keys.length === 2 && index.key.files_id === 1 && index.key.n === 1) {
return true;
}
return false;
});
if (!hasChunksIndex) {
const writeConcernOptions = getWriteOptions(stream);
await stream.chunks.createIndex(index, {
...writeConcernOptions,
background: true,
unique: true
});
}
}
function checkDone(stream, callback) {
if (stream.done)
return true;
if (stream.state.streamEnd && stream.state.outstandingRequests === 0 && !stream.state.errored) {
// Set done so we do not trigger duplicate createFilesDoc
stream.done = true;
// Create a new files doc
const filesDoc = createFilesDoc(stream.id, stream.length, stream.chunkSizeBytes, stream.filename, stream.options.contentType, stream.options.aliases, stream.options.metadata);
if (checkAborted(stream, callback)) {
return false;
}
stream.files.insertOne(filesDoc, getWriteOptions(stream)).then(() => {
stream.emit(GridFSBucketWriteStream.FINISH, filesDoc);
stream.emit(GridFSBucketWriteStream.CLOSE);
}, error => {
return __handleError(stream, error, callback);
});
return true;
}
return false;
}
async function checkIndexes(stream) {
const doc = await stream.files.findOne({}, { projection: { _id: 1 } });
if (doc != null) {
// If at least one document exists, assume the collection has the required index
return;
}
const index = { filename: 1, uploadDate: 1 };
let indexes;
try {
indexes = await stream.files.listIndexes().toArray();
}
catch (error) {
if (error instanceof error_1.MongoError && error.code === error_1.MONGODB_ERROR_CODES.NamespaceNotFound) {
indexes = [];
}
else {
throw error;
}
}
const hasFileIndex = !!indexes.find(index => {
const keys = Object.keys(index.key);
if (keys.length === 2 && index.key.filename === 1 && index.key.uploadDate === 1) {
return true;
}
return false;
});
if (!hasFileIndex) {
await stream.files.createIndex(index, { background: false });
}
await checkChunksIndex(stream);
}
function createFilesDoc(_id, length, chunkSize, filename, contentType, aliases, metadata) {
const ret = {
_id,
length,
chunkSize,
uploadDate: new Date(),
filename
};
if (contentType) {
ret.contentType = contentType;
}
if (aliases) {
ret.aliases = aliases;
}
if (metadata) {
ret.metadata = metadata;
}
return ret;
}
function doWrite(stream, chunk, encoding, callback) {
if (checkAborted(stream, callback)) {
return false;
}
const inputBuf = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk, encoding);
stream.length += inputBuf.length;
// Input is small enough to fit in our buffer
if (stream.pos + inputBuf.length < stream.chunkSizeBytes) {
inputBuf.copy(stream.bufToStore, stream.pos);
stream.pos += inputBuf.length;
callback && callback();
// Note that we reverse the typical semantics of write's return value
// to be compatible with node's `.pipe()` function.
// True means client can keep writing.
return true;
}
// Otherwise, the input is too big for the current chunk, so we need to
// flush to MongoDB.
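// Worked example with illustrative numbers (not from the driver): with
// chunkSizeBytes = 8, pos = 5 and a 10-byte input, the loop copies 3 bytes to
// fill and flush the current chunk, then copies the remaining 7 bytes into a
// fresh chunk, leaving pos = 7 with one insertOne outstanding.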
let inputBufRemaining = inputBuf.length;
let spaceRemaining = stream.chunkSizeBytes - stream.pos;
let numToCopy = Math.min(spaceRemaining, inputBuf.length);
let outstandingRequests = 0;
while (inputBufRemaining > 0) {
const inputBufPos = inputBuf.length - inputBufRemaining;
inputBuf.copy(stream.bufToStore, stream.pos, inputBufPos, inputBufPos + numToCopy);
stream.pos += numToCopy;
spaceRemaining -= numToCopy;
let doc;
if (spaceRemaining === 0) {
doc = createChunkDoc(stream.id, stream.n, Buffer.from(stream.bufToStore));
++stream.state.outstandingRequests;
++outstandingRequests;
if (checkAborted(stream, callback)) {
return false;
}
stream.chunks.insertOne(doc, getWriteOptions(stream)).then(() => {
--stream.state.outstandingRequests;
--outstandingRequests;
if (!outstandingRequests) {
stream.emit('drain', doc);
callback && callback();
checkDone(stream);
}
}, error => {
return __handleError(stream, error);
});
spaceRemaining = stream.chunkSizeBytes;
stream.pos = 0;
++stream.n;
}
inputBufRemaining -= numToCopy;
numToCopy = Math.min(spaceRemaining, inputBufRemaining);
}
// Note that we reverse the typical semantics of write's return value
// to be compatible with node's `.pipe()` function.
// False means the client should wait for the 'drain' event.
return false;
}
function getWriteOptions(stream) {
const obj = {};
if (stream.writeConcern) {
obj.writeConcern = {
w: stream.writeConcern.w,
wtimeout: stream.writeConcern.wtimeout,
j: stream.writeConcern.j
};
}
return obj;
}
function waitForIndexes(stream, callback) {
if (stream.bucket.s.checkedIndexes) {
return callback(false);
}
stream.bucket.once('index', () => {
callback(true);
});
return true;
}
function writeRemnant(stream, callback) {
// Buffer is empty, so don't bother to insert
if (stream.pos === 0) {
return checkDone(stream, callback);
}
++stream.state.outstandingRequests;
// Create a new buffer to make sure the buffer isn't bigger than it needs
// to be.
const remnant = Buffer.alloc(stream.pos);
stream.bufToStore.copy(remnant, 0, 0, stream.pos);
const doc = createChunkDoc(stream.id, stream.n, remnant);
// If the stream was aborted, do not write remnant
if (checkAborted(stream, callback)) {
return false;
}
stream.chunks.insertOne(doc, getWriteOptions(stream)).then(() => {
--stream.state.outstandingRequests;
checkDone(stream);
}, error => {
return __handleError(stream, error);
});
return true;
}
function checkAborted(stream, callback) {
if (stream.state.aborted) {
if (typeof callback === 'function') {
// TODO(NODE-3485): Replace with MongoGridFSStreamClosedError
callback(new error_1.MongoAPIError('Stream has been aborted'));
}
return true;
}
return false;
}
//# sourceMappingURL=upload.js.map
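A minimal usage sketch of the stream above; per the class docs, the stream is obtained through `GridFSBucket#openUploadStream()` rather than constructed directly. The connection string, database name, and ./data.bin path are illustrative assumptions, not taken from this diff:

const fs = require('fs');
const { pipeline } = require('stream/promises');
const { MongoClient, GridFSBucket } = require('mongodb');

async function upload() {
  // Hypothetical local deployment and input file.
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const bucket = new GridFSBucket(client.db('test'), { chunkSizeBytes: 255 * 1024 });
    const uploadStream = bucket.openUploadStream('data.bin');
    // pipeline() surfaces errors from both the file read and the GridFS write,
    // and resolves once the stream finishes (files doc written).
    await pipeline(fs.createReadStream('./data.bin'), uploadStream);
    console.log('stored file with _id', uploadStream.id);
  } finally {
    await client.close();
  }
}

upload().catch(console.error);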

1
node_modules/mongodb/lib/gridfs/upload.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

161
node_modules/mongodb/lib/index.js generated vendored Normal file
View file

@@ -0,0 +1,161 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Admin = exports.AbstractCursor = exports.MongoWriteConcernError = exports.MongoUnexpectedServerResponseError = exports.MongoTransactionError = exports.MongoTopologyClosedError = exports.MongoTailableCursorError = exports.MongoSystemError = exports.MongoServerSelectionError = exports.MongoServerError = exports.MongoServerClosedError = exports.MongoRuntimeError = exports.MongoParseError = exports.MongoNotConnectedError = exports.MongoNetworkTimeoutError = exports.MongoNetworkError = exports.MongoMissingDependencyError = exports.MongoMissingCredentialsError = exports.MongoKerberosError = exports.MongoInvalidArgumentError = exports.MongoGridFSStreamError = exports.MongoGridFSChunkError = exports.MongoExpiredSessionError = exports.MongoError = exports.MongoDriverError = exports.MongoDecompressionError = exports.MongoCursorInUseError = exports.MongoCursorExhaustedError = exports.MongoCompatibilityError = exports.MongoChangeStreamError = exports.MongoBatchReExecutionError = exports.MongoAWSError = exports.MongoAPIError = exports.ChangeStreamCursor = exports.MongoBulkWriteError = exports.Timestamp = exports.ObjectId = exports.MinKey = exports.MaxKey = exports.Long = exports.Int32 = exports.Double = exports.Decimal128 = exports.DBRef = exports.Code = exports.BSONType = exports.BSONSymbol = exports.BSONRegExp = exports.Binary = exports.BSON = void 0;
exports.ServerDescriptionChangedEvent = exports.ServerClosedEvent = exports.ConnectionReadyEvent = exports.ConnectionPoolReadyEvent = exports.ConnectionPoolMonitoringEvent = exports.ConnectionPoolCreatedEvent = exports.ConnectionPoolClosedEvent = exports.ConnectionPoolClearedEvent = exports.ConnectionCreatedEvent = exports.ConnectionClosedEvent = exports.ConnectionCheckOutStartedEvent = exports.ConnectionCheckOutFailedEvent = exports.ConnectionCheckedOutEvent = exports.ConnectionCheckedInEvent = exports.CommandSucceededEvent = exports.CommandStartedEvent = exports.CommandFailedEvent = exports.WriteConcern = exports.ReadPreference = exports.ReadConcern = exports.TopologyType = exports.ServerType = exports.ReadPreferenceMode = exports.ReadConcernLevel = exports.ProfilingLevel = exports.ReturnDocument = exports.ServerApiVersion = exports.ExplainVerbosity = exports.MongoErrorLabel = exports.AutoEncryptionLoggerLevel = exports.CURSOR_FLAGS = exports.Compressor = exports.AuthMechanism = exports.GSSAPICanonicalizationValue = exports.BatchType = exports.UnorderedBulkOperation = exports.OrderedBulkOperation = exports.MongoClient = exports.ListIndexesCursor = exports.ListCollectionsCursor = exports.GridFSBucketWriteStream = exports.GridFSBucketReadStream = exports.GridFSBucket = exports.FindCursor = exports.Db = exports.Collection = exports.ClientSession = exports.ChangeStream = exports.CancellationToken = exports.AggregationCursor = void 0;
exports.SrvPollingEvent = exports.TopologyOpeningEvent = exports.TopologyDescriptionChangedEvent = exports.TopologyClosedEvent = exports.ServerOpeningEvent = exports.ServerHeartbeatSucceededEvent = exports.ServerHeartbeatStartedEvent = exports.ServerHeartbeatFailedEvent = void 0;
const admin_1 = require("./admin");
Object.defineProperty(exports, "Admin", { enumerable: true, get: function () { return admin_1.Admin; } });
const ordered_1 = require("./bulk/ordered");
Object.defineProperty(exports, "OrderedBulkOperation", { enumerable: true, get: function () { return ordered_1.OrderedBulkOperation; } });
const unordered_1 = require("./bulk/unordered");
Object.defineProperty(exports, "UnorderedBulkOperation", { enumerable: true, get: function () { return unordered_1.UnorderedBulkOperation; } });
const change_stream_1 = require("./change_stream");
Object.defineProperty(exports, "ChangeStream", { enumerable: true, get: function () { return change_stream_1.ChangeStream; } });
const collection_1 = require("./collection");
Object.defineProperty(exports, "Collection", { enumerable: true, get: function () { return collection_1.Collection; } });
const abstract_cursor_1 = require("./cursor/abstract_cursor");
Object.defineProperty(exports, "AbstractCursor", { enumerable: true, get: function () { return abstract_cursor_1.AbstractCursor; } });
const aggregation_cursor_1 = require("./cursor/aggregation_cursor");
Object.defineProperty(exports, "AggregationCursor", { enumerable: true, get: function () { return aggregation_cursor_1.AggregationCursor; } });
const find_cursor_1 = require("./cursor/find_cursor");
Object.defineProperty(exports, "FindCursor", { enumerable: true, get: function () { return find_cursor_1.FindCursor; } });
const list_collections_cursor_1 = require("./cursor/list_collections_cursor");
Object.defineProperty(exports, "ListCollectionsCursor", { enumerable: true, get: function () { return list_collections_cursor_1.ListCollectionsCursor; } });
const list_indexes_cursor_1 = require("./cursor/list_indexes_cursor");
Object.defineProperty(exports, "ListIndexesCursor", { enumerable: true, get: function () { return list_indexes_cursor_1.ListIndexesCursor; } });
const db_1 = require("./db");
Object.defineProperty(exports, "Db", { enumerable: true, get: function () { return db_1.Db; } });
const gridfs_1 = require("./gridfs");
Object.defineProperty(exports, "GridFSBucket", { enumerable: true, get: function () { return gridfs_1.GridFSBucket; } });
const download_1 = require("./gridfs/download");
Object.defineProperty(exports, "GridFSBucketReadStream", { enumerable: true, get: function () { return download_1.GridFSBucketReadStream; } });
const upload_1 = require("./gridfs/upload");
Object.defineProperty(exports, "GridFSBucketWriteStream", { enumerable: true, get: function () { return upload_1.GridFSBucketWriteStream; } });
const mongo_client_1 = require("./mongo_client");
Object.defineProperty(exports, "MongoClient", { enumerable: true, get: function () { return mongo_client_1.MongoClient; } });
const mongo_types_1 = require("./mongo_types");
Object.defineProperty(exports, "CancellationToken", { enumerable: true, get: function () { return mongo_types_1.CancellationToken; } });
const sessions_1 = require("./sessions");
Object.defineProperty(exports, "ClientSession", { enumerable: true, get: function () { return sessions_1.ClientSession; } });
/** @public */
var bson_1 = require("./bson");
Object.defineProperty(exports, "BSON", { enumerable: true, get: function () { return bson_1.BSON; } });
var bson_2 = require("./bson");
Object.defineProperty(exports, "Binary", { enumerable: true, get: function () { return bson_2.Binary; } });
Object.defineProperty(exports, "BSONRegExp", { enumerable: true, get: function () { return bson_2.BSONRegExp; } });
Object.defineProperty(exports, "BSONSymbol", { enumerable: true, get: function () { return bson_2.BSONSymbol; } });
Object.defineProperty(exports, "BSONType", { enumerable: true, get: function () { return bson_2.BSONType; } });
Object.defineProperty(exports, "Code", { enumerable: true, get: function () { return bson_2.Code; } });
Object.defineProperty(exports, "DBRef", { enumerable: true, get: function () { return bson_2.DBRef; } });
Object.defineProperty(exports, "Decimal128", { enumerable: true, get: function () { return bson_2.Decimal128; } });
Object.defineProperty(exports, "Double", { enumerable: true, get: function () { return bson_2.Double; } });
Object.defineProperty(exports, "Int32", { enumerable: true, get: function () { return bson_2.Int32; } });
Object.defineProperty(exports, "Long", { enumerable: true, get: function () { return bson_2.Long; } });
Object.defineProperty(exports, "MaxKey", { enumerable: true, get: function () { return bson_2.MaxKey; } });
Object.defineProperty(exports, "MinKey", { enumerable: true, get: function () { return bson_2.MinKey; } });
Object.defineProperty(exports, "ObjectId", { enumerable: true, get: function () { return bson_2.ObjectId; } });
Object.defineProperty(exports, "Timestamp", { enumerable: true, get: function () { return bson_2.Timestamp; } });
var common_1 = require("./bulk/common");
Object.defineProperty(exports, "MongoBulkWriteError", { enumerable: true, get: function () { return common_1.MongoBulkWriteError; } });
var change_stream_cursor_1 = require("./cursor/change_stream_cursor");
Object.defineProperty(exports, "ChangeStreamCursor", { enumerable: true, get: function () { return change_stream_cursor_1.ChangeStreamCursor; } });
var error_1 = require("./error");
Object.defineProperty(exports, "MongoAPIError", { enumerable: true, get: function () { return error_1.MongoAPIError; } });
Object.defineProperty(exports, "MongoAWSError", { enumerable: true, get: function () { return error_1.MongoAWSError; } });
Object.defineProperty(exports, "MongoBatchReExecutionError", { enumerable: true, get: function () { return error_1.MongoBatchReExecutionError; } });
Object.defineProperty(exports, "MongoChangeStreamError", { enumerable: true, get: function () { return error_1.MongoChangeStreamError; } });
Object.defineProperty(exports, "MongoCompatibilityError", { enumerable: true, get: function () { return error_1.MongoCompatibilityError; } });
Object.defineProperty(exports, "MongoCursorExhaustedError", { enumerable: true, get: function () { return error_1.MongoCursorExhaustedError; } });
Object.defineProperty(exports, "MongoCursorInUseError", { enumerable: true, get: function () { return error_1.MongoCursorInUseError; } });
Object.defineProperty(exports, "MongoDecompressionError", { enumerable: true, get: function () { return error_1.MongoDecompressionError; } });
Object.defineProperty(exports, "MongoDriverError", { enumerable: true, get: function () { return error_1.MongoDriverError; } });
Object.defineProperty(exports, "MongoError", { enumerable: true, get: function () { return error_1.MongoError; } });
Object.defineProperty(exports, "MongoExpiredSessionError", { enumerable: true, get: function () { return error_1.MongoExpiredSessionError; } });
Object.defineProperty(exports, "MongoGridFSChunkError", { enumerable: true, get: function () { return error_1.MongoGridFSChunkError; } });
Object.defineProperty(exports, "MongoGridFSStreamError", { enumerable: true, get: function () { return error_1.MongoGridFSStreamError; } });
Object.defineProperty(exports, "MongoInvalidArgumentError", { enumerable: true, get: function () { return error_1.MongoInvalidArgumentError; } });
Object.defineProperty(exports, "MongoKerberosError", { enumerable: true, get: function () { return error_1.MongoKerberosError; } });
Object.defineProperty(exports, "MongoMissingCredentialsError", { enumerable: true, get: function () { return error_1.MongoMissingCredentialsError; } });
Object.defineProperty(exports, "MongoMissingDependencyError", { enumerable: true, get: function () { return error_1.MongoMissingDependencyError; } });
Object.defineProperty(exports, "MongoNetworkError", { enumerable: true, get: function () { return error_1.MongoNetworkError; } });
Object.defineProperty(exports, "MongoNetworkTimeoutError", { enumerable: true, get: function () { return error_1.MongoNetworkTimeoutError; } });
Object.defineProperty(exports, "MongoNotConnectedError", { enumerable: true, get: function () { return error_1.MongoNotConnectedError; } });
Object.defineProperty(exports, "MongoParseError", { enumerable: true, get: function () { return error_1.MongoParseError; } });
Object.defineProperty(exports, "MongoRuntimeError", { enumerable: true, get: function () { return error_1.MongoRuntimeError; } });
Object.defineProperty(exports, "MongoServerClosedError", { enumerable: true, get: function () { return error_1.MongoServerClosedError; } });
Object.defineProperty(exports, "MongoServerError", { enumerable: true, get: function () { return error_1.MongoServerError; } });
Object.defineProperty(exports, "MongoServerSelectionError", { enumerable: true, get: function () { return error_1.MongoServerSelectionError; } });
Object.defineProperty(exports, "MongoSystemError", { enumerable: true, get: function () { return error_1.MongoSystemError; } });
Object.defineProperty(exports, "MongoTailableCursorError", { enumerable: true, get: function () { return error_1.MongoTailableCursorError; } });
Object.defineProperty(exports, "MongoTopologyClosedError", { enumerable: true, get: function () { return error_1.MongoTopologyClosedError; } });
Object.defineProperty(exports, "MongoTransactionError", { enumerable: true, get: function () { return error_1.MongoTransactionError; } });
Object.defineProperty(exports, "MongoUnexpectedServerResponseError", { enumerable: true, get: function () { return error_1.MongoUnexpectedServerResponseError; } });
Object.defineProperty(exports, "MongoWriteConcernError", { enumerable: true, get: function () { return error_1.MongoWriteConcernError; } });
// enums
var common_2 = require("./bulk/common");
Object.defineProperty(exports, "BatchType", { enumerable: true, get: function () { return common_2.BatchType; } });
var gssapi_1 = require("./cmap/auth/gssapi");
Object.defineProperty(exports, "GSSAPICanonicalizationValue", { enumerable: true, get: function () { return gssapi_1.GSSAPICanonicalizationValue; } });
var providers_1 = require("./cmap/auth/providers");
Object.defineProperty(exports, "AuthMechanism", { enumerable: true, get: function () { return providers_1.AuthMechanism; } });
var compression_1 = require("./cmap/wire_protocol/compression");
Object.defineProperty(exports, "Compressor", { enumerable: true, get: function () { return compression_1.Compressor; } });
var abstract_cursor_2 = require("./cursor/abstract_cursor");
Object.defineProperty(exports, "CURSOR_FLAGS", { enumerable: true, get: function () { return abstract_cursor_2.CURSOR_FLAGS; } });
var deps_1 = require("./deps");
Object.defineProperty(exports, "AutoEncryptionLoggerLevel", { enumerable: true, get: function () { return deps_1.AutoEncryptionLoggerLevel; } });
var error_2 = require("./error");
Object.defineProperty(exports, "MongoErrorLabel", { enumerable: true, get: function () { return error_2.MongoErrorLabel; } });
var explain_1 = require("./explain");
Object.defineProperty(exports, "ExplainVerbosity", { enumerable: true, get: function () { return explain_1.ExplainVerbosity; } });
var mongo_client_2 = require("./mongo_client");
Object.defineProperty(exports, "ServerApiVersion", { enumerable: true, get: function () { return mongo_client_2.ServerApiVersion; } });
var find_and_modify_1 = require("./operations/find_and_modify");
Object.defineProperty(exports, "ReturnDocument", { enumerable: true, get: function () { return find_and_modify_1.ReturnDocument; } });
var set_profiling_level_1 = require("./operations/set_profiling_level");
Object.defineProperty(exports, "ProfilingLevel", { enumerable: true, get: function () { return set_profiling_level_1.ProfilingLevel; } });
var read_concern_1 = require("./read_concern");
Object.defineProperty(exports, "ReadConcernLevel", { enumerable: true, get: function () { return read_concern_1.ReadConcernLevel; } });
var read_preference_1 = require("./read_preference");
Object.defineProperty(exports, "ReadPreferenceMode", { enumerable: true, get: function () { return read_preference_1.ReadPreferenceMode; } });
var common_3 = require("./sdam/common");
Object.defineProperty(exports, "ServerType", { enumerable: true, get: function () { return common_3.ServerType; } });
Object.defineProperty(exports, "TopologyType", { enumerable: true, get: function () { return common_3.TopologyType; } });
// Helper classes
var read_concern_2 = require("./read_concern");
Object.defineProperty(exports, "ReadConcern", { enumerable: true, get: function () { return read_concern_2.ReadConcern; } });
var read_preference_2 = require("./read_preference");
Object.defineProperty(exports, "ReadPreference", { enumerable: true, get: function () { return read_preference_2.ReadPreference; } });
var write_concern_1 = require("./write_concern");
Object.defineProperty(exports, "WriteConcern", { enumerable: true, get: function () { return write_concern_1.WriteConcern; } });
// events
var command_monitoring_events_1 = require("./cmap/command_monitoring_events");
Object.defineProperty(exports, "CommandFailedEvent", { enumerable: true, get: function () { return command_monitoring_events_1.CommandFailedEvent; } });
Object.defineProperty(exports, "CommandStartedEvent", { enumerable: true, get: function () { return command_monitoring_events_1.CommandStartedEvent; } });
Object.defineProperty(exports, "CommandSucceededEvent", { enumerable: true, get: function () { return command_monitoring_events_1.CommandSucceededEvent; } });
var connection_pool_events_1 = require("./cmap/connection_pool_events");
Object.defineProperty(exports, "ConnectionCheckedInEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionCheckedInEvent; } });
Object.defineProperty(exports, "ConnectionCheckedOutEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionCheckedOutEvent; } });
Object.defineProperty(exports, "ConnectionCheckOutFailedEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionCheckOutFailedEvent; } });
Object.defineProperty(exports, "ConnectionCheckOutStartedEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionCheckOutStartedEvent; } });
Object.defineProperty(exports, "ConnectionClosedEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionClosedEvent; } });
Object.defineProperty(exports, "ConnectionCreatedEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionCreatedEvent; } });
Object.defineProperty(exports, "ConnectionPoolClearedEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionPoolClearedEvent; } });
Object.defineProperty(exports, "ConnectionPoolClosedEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionPoolClosedEvent; } });
Object.defineProperty(exports, "ConnectionPoolCreatedEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionPoolCreatedEvent; } });
Object.defineProperty(exports, "ConnectionPoolMonitoringEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionPoolMonitoringEvent; } });
Object.defineProperty(exports, "ConnectionPoolReadyEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionPoolReadyEvent; } });
Object.defineProperty(exports, "ConnectionReadyEvent", { enumerable: true, get: function () { return connection_pool_events_1.ConnectionReadyEvent; } });
var events_1 = require("./sdam/events");
Object.defineProperty(exports, "ServerClosedEvent", { enumerable: true, get: function () { return events_1.ServerClosedEvent; } });
Object.defineProperty(exports, "ServerDescriptionChangedEvent", { enumerable: true, get: function () { return events_1.ServerDescriptionChangedEvent; } });
Object.defineProperty(exports, "ServerHeartbeatFailedEvent", { enumerable: true, get: function () { return events_1.ServerHeartbeatFailedEvent; } });
Object.defineProperty(exports, "ServerHeartbeatStartedEvent", { enumerable: true, get: function () { return events_1.ServerHeartbeatStartedEvent; } });
Object.defineProperty(exports, "ServerHeartbeatSucceededEvent", { enumerable: true, get: function () { return events_1.ServerHeartbeatSucceededEvent; } });
Object.defineProperty(exports, "ServerOpeningEvent", { enumerable: true, get: function () { return events_1.ServerOpeningEvent; } });
Object.defineProperty(exports, "TopologyClosedEvent", { enumerable: true, get: function () { return events_1.TopologyClosedEvent; } });
Object.defineProperty(exports, "TopologyDescriptionChangedEvent", { enumerable: true, get: function () { return events_1.TopologyDescriptionChangedEvent; } });
Object.defineProperty(exports, "TopologyOpeningEvent", { enumerable: true, get: function () { return events_1.TopologyOpeningEvent; } });
var srv_polling_1 = require("./sdam/srv_polling");
Object.defineProperty(exports, "SrvPollingEvent", { enumerable: true, get: function () { return srv_polling_1.SrvPollingEvent; } });
//# sourceMappingURL=index.js.map
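Since the index.js above is a barrel file that re-exports the public API, every public name is available from the package root; a two-line sketch using names taken from the re-exports above:

const { MongoClient, GridFSBucket, ObjectId, MongoServerError } = require('mongodb');
console.log(new ObjectId().toHexString().length); // 24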

1
node_modules/mongodb/lib/index.js.map generated vendored Normal file
View file

@@ -0,0 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":";;;;;AAAA,mCAAgC;AA0E9B,sFA1EO,aAAK,OA0EP;AAzEP,4CAAsD;AAuFpD,qGAvFO,8BAAoB,OAuFP;AAtFtB,gDAA0D;AAuFxD,uGAvFO,kCAAsB,OAuFP;AAtFxB,mDAA+C;AA0E7C,6FA1EO,4BAAY,OA0EP;AAzEd,6CAA0C;AA2ExC,2FA3EO,uBAAU,OA2EP;AA1EZ,8DAA0D;AAmExD,+FAnEO,gCAAc,OAmEP;AAlEhB,oEAAgE;AAqE9D,kGArEO,sCAAiB,OAqEP;AApEnB,sDAAkD;AA0EhD,2FA1EO,wBAAU,OA0EP;AAzEZ,8EAAyE;AA6EvE,sGA7EO,+CAAqB,OA6EP;AA5EvB,sEAAiE;AA6E/D,kGA7EO,uCAAiB,OA6EP;AA5EnB,6BAA0B;AAsExB,mFAtEO,OAAE,OAsEP;AArEJ,qCAAwC;AAuEtC,6FAvEO,qBAAY,OAuEP;AAtEd,gDAA2D;AAuEzD,uGAvEO,iCAAsB,OAuEP;AAtExB,4CAA0D;AAuExD,wGAvEO,gCAAuB,OAuEP;AAtEzB,iDAA6C;AAyE3C,4FAzEO,0BAAW,OAyEP;AAxEb,+CAAkD;AA6DhD,kGA7DO,+BAAiB,OA6DP;AA5DnB,yCAA2C;AA8DzC,8FA9DO,wBAAa,OA8DP;AA5Df,cAAc;AACd,+BAA8B;AAArB,4FAAA,IAAI,OAAA;AACb,+BAegB;AAdd,8FAAA,MAAM,OAAA;AACN,kGAAA,UAAU,OAAA;AACV,kGAAA,UAAU,OAAA;AACV,gGAAA,QAAQ,OAAA;AACR,4FAAA,IAAI,OAAA;AACJ,6FAAA,KAAK,OAAA;AACL,kGAAA,UAAU,OAAA;AACV,8FAAA,MAAM,OAAA;AACN,6FAAA,KAAK,OAAA;AACL,4FAAA,IAAI,OAAA;AACJ,8FAAA,MAAM,OAAA;AACN,8FAAA,MAAM,OAAA;AACN,gGAAA,QAAQ,OAAA;AACR,iGAAA,SAAS,OAAA;AAEX,wCAA6F;AAA3C,6GAAA,mBAAmB,OAAA;AACrE,sEAAmE;AAA1D,0HAAA,kBAAkB,OAAA;AAC3B,iCAgCiB;AA/Bf,sGAAA,aAAa,OAAA;AACb,sGAAA,aAAa,OAAA;AACb,mHAAA,0BAA0B,OAAA;AAC1B,+GAAA,sBAAsB,OAAA;AACtB,gHAAA,uBAAuB,OAAA;AACvB,kHAAA,yBAAyB,OAAA;AACzB,8GAAA,qBAAqB,OAAA;AACrB,gHAAA,uBAAuB,OAAA;AACvB,yGAAA,gBAAgB,OAAA;AAChB,mGAAA,UAAU,OAAA;AACV,iHAAA,wBAAwB,OAAA;AACxB,8GAAA,qBAAqB,OAAA;AACrB,+GAAA,sBAAsB,OAAA;AACtB,kHAAA,yBAAyB,OAAA;AACzB,2GAAA,kBAAkB,OAAA;AAClB,qHAAA,4BAA4B,OAAA;AAC5B,oHAAA,2BAA2B,OAAA;AAC3B,0GAAA,iBAAiB,OAAA;AACjB,iHAAA,wBAAwB,OAAA;AACxB,+GAAA,sBAAsB,OAAA;AACtB,wGAAA,eAAe,OAAA;AACf,0GAAA,iBAAiB,OAAA;AACjB,+GAAA,sBAAsB,OAAA;AACtB,yGAAA,gBAAgB,OAAA;AAChB,kHAAA,yBAAyB,OAAA;AACzB,yGAAA,gBAAgB,OAAA;AAChB,iHAAA,wBAAwB,OAAA;AACxB,iHAAA,wBAAwB,OAAA;AACxB,8GAAA,qBAAqB,OAAA;AACrB,2HAAA,kCAAkC,OAAA;AAClC,+GAAA,sBAAsB,OAAA;AAuBxB,QAAQ;AACR,wCAA0C;AAAjC,mGAAA,SAAS,OAAA;AAClB,6CAAiE;AAAxD,qHAAA,2BAA2B,OAAA;AACpC,mDAAsD;AAA7C,0GAAA,aAAa,OAAA;AACtB,gEAA8D;AAArD,yGAAA,UAAU,OAAA;AACnB,4DAAwD;AAA/C,+GAAA,YAAY,OAAA;AACrB,+BAAmD;AAA1C,iHAAA,yBAAyB,OAAA;AAClC,iCAA0C;AAAjC,wGAAA,eAAe,OAAA;AACxB,qCAA6C;AAApC,2GAAA,gBAAgB,OAAA;AACzB,+CAAkD;AAAzC,gHAAA,gBAAgB,OAAA;AACzB,gEAA8D;AAArD,iHAAA,cAAc,OAAA;AACvB,wEAAkE;AAAzD,qHAAA,cAAc,OAAA;AACvB,+CAAkD;AAAzC,gHAAA,gBAAgB,OAAA;AACzB,qDAAuD;AAA9C,qHAAA,kBAAkB,OAAA;AAC3B,wCAAyD;AAAhD,oGAAA,UAAU,OAAA;AAAE,sGAAA,YAAY,OAAA;AAEjC,iBAAiB;AACjB,+CAA6C;AAApC,2GAAA,WAAW,OAAA;AACpB,qDAAmD;AAA1C,iHAAA,cAAc,OAAA;AACvB,iDAA+C;AAAtC,6GAAA,YAAY,OAAA;AAErB,SAAS;AACT,8EAI0C;AAHxC,+HAAA,kBAAkB,OAAA;AAClB,gIAAA,mBAAmB,OAAA;AACnB,kIAAA,qBAAqB,OAAA;AAEvB,wEAauC;AAZrC,kIAAA,wBAAwB,OAAA;AACxB,mIAAA,yBAAyB,OAAA;AACzB,uIAAA,6BAA6B,OAAA;AAC7B,wIAAA,8BAA8B,OAAA;AAC9B,+HAAA,qBAAqB,OAAA;AACrB,gIAAA,sBAAsB,OAAA;AACtB,oIAAA,0BAA0B,OAAA;AAC1B,mIAAA,yBAAyB,OAAA;AACzB,oIAAA,0BAA0B,OAAA;AAC1B,uIAAA,6BAA6B,OAAA;AAC7B,kIAAA,wBAAwB,OAAA;AACxB,8HAAA,oBAAoB,OAAA;AAEtB,wCAUuB;AATrB,2GAAA,iBAAiB,OAAA;AACjB,uHAAA,6BAA6B,OAAA;AAC7B,oHAAA,0BAA0B,OAAA;AAC1B,qHAAA,2BAA2B,OAAA;AAC3B,uHAAA,6BAA6B,OAAA;AAC7B,4GAAA,kBAAkB,OAAA;AAClB,6GAAA,mBAAmB,OAAA;AACnB,yHAAA,+BAA+B,OAAA;AAC/B,8GAAA,oBAAoB,OAAA;AAEtB,kDAAqD;AAA5C,8GAAA,eAAe,OAAA"}

292
node_modules/mongodb/lib/mongo_client.js generated vendored Normal file
View file

@@ -0,0 +1,292 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoClient = exports.ServerApiVersion = void 0;
const util_1 = require("util");
const bson_1 = require("./bson");
const change_stream_1 = require("./change_stream");
const connection_string_1 = require("./connection_string");
const constants_1 = require("./constants");
const db_1 = require("./db");
const error_1 = require("./error");
const mongo_logger_1 = require("./mongo_logger");
const mongo_types_1 = require("./mongo_types");
const read_preference_1 = require("./read_preference");
const server_selection_1 = require("./sdam/server_selection");
const topology_1 = require("./sdam/topology");
const sessions_1 = require("./sessions");
const utils_1 = require("./utils");
/** @public */
exports.ServerApiVersion = Object.freeze({
v1: '1'
});
/** @internal */
const kOptions = Symbol('options');
/**
* The **MongoClient** class allows for making connections to MongoDB.
* @public
*
* @remarks
* The programmatically provided options take precedence over the URI options.
*
* @example
* ```ts
* import { MongoClient } from 'mongodb';
*
* // Enable command monitoring for debugging
* const client = new MongoClient('mongodb://localhost:27017', { monitorCommands: true });
*
* client.on('commandStarted', started => console.log(started));
* await client.db().collection('pets').insertOne({ name: 'spot', kind: 'dog' });
* ```
*/
class MongoClient extends mongo_types_1.TypedEventEmitter {
constructor(url, options) {
super();
this[kOptions] = (0, connection_string_1.parseOptions)(url, this, options);
this.mongoLogger = new mongo_logger_1.MongoLogger(this[kOptions].mongoLoggerOptions);
// eslint-disable-next-line @typescript-eslint/no-this-alias
const client = this;
// The internal state
this.s = {
url,
bsonOptions: (0, bson_1.resolveBSONOptions)(this[kOptions]),
namespace: (0, utils_1.ns)('admin'),
hasBeenClosed: false,
sessionPool: new sessions_1.ServerSessionPool(this),
activeSessions: new Set(),
get options() {
return client[kOptions];
},
get readConcern() {
return client[kOptions].readConcern;
},
get writeConcern() {
return client[kOptions].writeConcern;
},
get readPreference() {
return client[kOptions].readPreference;
},
get isMongoClient() {
return true;
}
};
}
get options() {
return Object.freeze({ ...this[kOptions] });
}
get serverApi() {
return this[kOptions].serverApi && Object.freeze({ ...this[kOptions].serverApi });
}
/**
* Intended for APM use only
* @internal
*/
get monitorCommands() {
return this[kOptions].monitorCommands;
}
set monitorCommands(value) {
this[kOptions].monitorCommands = value;
}
get autoEncrypter() {
return this[kOptions].autoEncrypter;
}
get readConcern() {
return this.s.readConcern;
}
get writeConcern() {
return this.s.writeConcern;
}
get readPreference() {
return this.s.readPreference;
}
get bsonOptions() {
return this.s.bsonOptions;
}
/**
* Connect to MongoDB using a URL
*
* @see docs.mongodb.org/manual/reference/connection-string/
*/
async connect() {
if (this.topology && this.topology.isConnected()) {
return this;
}
const options = this[kOptions];
if (typeof options.srvHost === 'string') {
const hosts = await (0, connection_string_1.resolveSRVRecord)(options);
for (const [index, host] of hosts.entries()) {
options.hosts[index] = host;
}
}
const topology = new topology_1.Topology(options.hosts, options);
// Events can be emitted before initialization is complete, so we save the
// reference to the topology on the client ASAP so that event handlers can access it
this.topology = topology;
topology.client = this;
topology.once(topology_1.Topology.OPEN, () => this.emit('open', this));
for (const event of constants_1.MONGO_CLIENT_EVENTS) {
topology.on(event, (...args) => this.emit(event, ...args));
}
const topologyConnect = async () => {
try {
await (0, util_1.promisify)(callback => topology.connect(options, callback))();
}
catch (error) {
topology.close({ force: true });
throw error;
}
};
if (this.autoEncrypter) {
const initAutoEncrypter = (0, util_1.promisify)(callback => this.autoEncrypter?.init(callback));
await initAutoEncrypter();
await topologyConnect();
await options.encrypter.connectInternalClient();
}
else {
await topologyConnect();
}
return this;
}
/**
* Close the client and its underlying connections
*
* @param force - Force close, emitting no events
*/
async close(force = false) {
// There's no way to set hasBeenClosed back to false
Object.defineProperty(this.s, 'hasBeenClosed', {
value: true,
enumerable: true,
configurable: false,
writable: false
});
const activeSessionEnds = Array.from(this.s.activeSessions, session => session.endSession());
this.s.activeSessions.clear();
await Promise.all(activeSessionEnds);
if (this.topology == null) {
return;
}
// If attempting to select a server would yield no servers, we short-circuit
// to avoid waiting out the server selection timeout.
const selector = (0, server_selection_1.readPreferenceServerSelector)(read_preference_1.ReadPreference.primaryPreferred);
const topologyDescription = this.topology.description;
const serverDescriptions = Array.from(topologyDescription.servers.values());
const servers = selector(topologyDescription, serverDescriptions);
if (servers.length !== 0) {
const endSessions = Array.from(this.s.sessionPool.sessions, ({ id }) => id);
if (endSessions.length !== 0) {
await this.db('admin')
.command({ endSessions }, { readPreference: read_preference_1.ReadPreference.primaryPreferred, noResponse: true })
.catch(() => null); // outcome does not matter
}
}
// clear out references to old topology
const topology = this.topology;
this.topology = undefined;
await new Promise((resolve, reject) => {
topology.close({ force }, error => {
if (error)
return reject(error);
const { encrypter } = this[kOptions];
if (encrypter) {
return encrypter.close(this, force, error => {
if (error)
return reject(error);
resolve();
});
}
resolve();
});
});
}
/**
* Create a new Db instance sharing the current socket connections.
*
* @param dbName - The name of the database we want to use. If not provided, use database name from connection string.
* @param options - Optional settings for Db construction
*/
db(dbName, options) {
options = options ?? {};
// Default to db from connection string if not provided
if (!dbName) {
dbName = this.options.dbName;
}
// Copy the options and add our internal override of the not-shared flag
const finalOptions = Object.assign({}, this[kOptions], options);
// Return the db object
const db = new db_1.Db(this, dbName, finalOptions);
// Return the database
return db;
}
/**
* Connect to MongoDB using a URL
*
* @remarks
* The programmatically provided options take precedence over the URI options.
*
* @see https://docs.mongodb.org/manual/reference/connection-string/
*/
static async connect(url, options) {
const client = new this(url, options);
return client.connect();
}
/** Starts a new session on the server */
startSession(options) {
const session = new sessions_1.ClientSession(this, this.s.sessionPool, { explicit: true, ...options }, this[kOptions]);
this.s.activeSessions.add(session);
session.once('ended', () => {
this.s.activeSessions.delete(session);
});
return session;
}
async withSession(optionsOrOperation, callback) {
const options = {
// Always define an owner
owner: Symbol(),
// If it's an object inherit the options
...(typeof optionsOrOperation === 'object' ? optionsOrOperation : {})
};
const withSessionCallback = typeof optionsOrOperation === 'function' ? optionsOrOperation : callback;
if (withSessionCallback == null) {
throw new error_1.MongoInvalidArgumentError('Missing required callback parameter');
}
const session = this.startSession(options);
try {
await withSessionCallback(session);
}
finally {
try {
await session.endSession();
}
catch {
// We are not concerned with errors from endSession()
}
}
}
/**
* Create a new Change Stream, watching for new changes (insertions, updates,
* replacements, deletions, and invalidations) in this cluster. Will ignore all
* changes to system collections, as well as the local, admin, and config databases.
*
* @remarks
* watch() accepts two generic arguments for distinct use cases:
* - The first is to provide the schema that may be defined for all the data within the current cluster
* - The second is to override the shape of the change stream document entirely; if it is not provided, the type will default to ChangeStreamDocument of the first argument
*
* @param pipeline - An array of {@link https://docs.mongodb.com/manual/reference/operator/aggregation-pipeline/|aggregation pipeline stages} through which to pass change stream documents. This allows for filtering (using $match) and manipulating the change stream documents.
* @param options - Optional settings for the command
* @typeParam TSchema - Type of the data being detected by the change stream
* @typeParam TChange - Type of the whole change stream document emitted
*/
watch(pipeline = [], options = {}) {
// Allow optionally not specifying a pipeline
if (!Array.isArray(pipeline)) {
options = pipeline;
pipeline = [];
}
return new change_stream_1.ChangeStream(this, pipeline, (0, utils_1.resolveOptions)(this, options));
}
}
exports.MongoClient = MongoClient;
//# sourceMappingURL=mongo_client.js.map
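A lifecycle sketch tying together connect(), withSession(), watch(), and close() from the class above; the URI and the db/collection names are illustrative assumptions, and watch() additionally requires a replica set or sharded cluster:

const { MongoClient } = require('mongodb');

async function run() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    // watch() observes the whole cluster; the $match stage narrows it to inserts.
    const changes = client.watch([{ $match: { operationType: 'insert' } }]);
    changes.on('change', change => console.log('saw', change.operationType));

    // withSession() starts a session, runs the callback, and always ends the
    // session afterwards, even if the callback throws.
    await client.withSession(async session => {
      await client.db('app').collection('events').insertOne({ at: new Date() }, { session });
    });

    await changes.close();
  } finally {
    await client.close();
  }
}

run().catch(console.error);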

1
node_modules/mongodb/lib/mongo_client.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

113
node_modules/mongodb/lib/mongo_logger.js generated vendored Normal file
View file

@@ -0,0 +1,113 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoLogger = exports.MongoLoggableComponent = exports.SeverityLevel = void 0;
const stream_1 = require("stream");
const utils_1 = require("./utils");
/** @internal */
exports.SeverityLevel = Object.freeze({
EMERGENCY: 'emergency',
ALERT: 'alert',
CRITICAL: 'critical',
ERROR: 'error',
WARNING: 'warn',
NOTICE: 'notice',
INFORMATIONAL: 'info',
DEBUG: 'debug',
TRACE: 'trace',
OFF: 'off'
});
/** @internal */
exports.MongoLoggableComponent = Object.freeze({
COMMAND: 'command',
TOPOLOGY: 'topology',
SERVER_SELECTION: 'serverSelection',
CONNECTION: 'connection'
});
/**
* Parses a string as one of SeverityLevel
*
* @param s - the value to be parsed
* @returns one of SeverityLevel if value can be parsed as such, otherwise null
*/
function parseSeverityFromString(s) {
const validSeverities = Object.values(exports.SeverityLevel);
const lowerSeverity = s?.toLowerCase();
if (lowerSeverity != null && validSeverities.includes(lowerSeverity)) {
return lowerSeverity;
}
return null;
}
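// For example, parseSeverityFromString('WARN') returns 'warn', while an
// unrecognized value such as parseSeverityFromString('bogus') returns null.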
/**
* Resolves the MONGODB_LOG_PATH and mongodbLogPath options from the environment and the
* MongoClient options, respectively.
*
* @returns the Writable stream to write logs to
*/
function resolveLogPath({ MONGODB_LOG_PATH }, { mongodbLogPath }) {
const isValidLogDestinationString = (destination) => ['stdout', 'stderr'].includes(destination.toLowerCase());
if (typeof mongodbLogPath === 'string' && isValidLogDestinationString(mongodbLogPath)) {
return mongodbLogPath.toLowerCase() === 'stderr' ? process.stderr : process.stdout;
}
// TODO(NODE-4813): check for minimal interface instead of instanceof Writable
if (typeof mongodbLogPath === 'object' && mongodbLogPath instanceof stream_1.Writable) {
return mongodbLogPath;
}
if (typeof MONGODB_LOG_PATH === 'string' && isValidLogDestinationString(MONGODB_LOG_PATH)) {
return MONGODB_LOG_PATH.toLowerCase() === 'stderr' ? process.stderr : process.stdout;
}
return process.stderr;
}
/** @internal */
class MongoLogger {
constructor(options) {
this.componentSeverities = options.componentSeverities;
this.maxDocumentLength = options.maxDocumentLength;
this.logDestination = options.logDestination;
}
/* eslint-disable @typescript-eslint/no-unused-vars */
/* eslint-disable @typescript-eslint/no-empty-function */
emergency(component, message) { }
alert(component, message) { }
critical(component, message) { }
error(component, message) { }
warn(component, message) { }
notice(component, message) { }
info(component, message) { }
debug(component, message) { }
trace(component, message) { }
/**
* Merges options set through environment variables and the MongoClient, preferring MongoClient
* options when both are set, and substituting defaults for values not set. Options set in the
* constructor take precedence over both environment variables and MongoClient options.
*
* @remarks
* When parsing component severity levels, invalid values are treated as unset and replaced with
* the default severity.
*
* @param envOptions - options set for the logger from the environment
* @param clientOptions - options set for the logger in the MongoClient options
* @returns a MongoLoggerOptions object to be used when instantiating a new MongoLogger
*/
static resolveOptions(envOptions, clientOptions) {
// client options take precedence over env options
const combinedOptions = {
...envOptions,
...clientOptions,
mongodbLogPath: resolveLogPath(envOptions, clientOptions)
};
const defaultSeverity = parseSeverityFromString(combinedOptions.MONGODB_LOG_ALL) ?? exports.SeverityLevel.OFF;
return {
componentSeverities: {
command: parseSeverityFromString(combinedOptions.MONGODB_LOG_COMMAND) ?? defaultSeverity,
topology: parseSeverityFromString(combinedOptions.MONGODB_LOG_TOPOLOGY) ?? defaultSeverity,
serverSelection: parseSeverityFromString(combinedOptions.MONGODB_LOG_SERVER_SELECTION) ?? defaultSeverity,
connection: parseSeverityFromString(combinedOptions.MONGODB_LOG_CONNECTION) ?? defaultSeverity,
default: defaultSeverity
},
maxDocumentLength: (0, utils_1.parseUnsignedInteger)(combinedOptions.MONGODB_LOG_MAX_DOCUMENT_LENGTH) ?? 1000,
logDestination: combinedOptions.mongodbLogPath
};
}
}
exports.MongoLogger = MongoLogger;
//# sourceMappingURL=mongo_logger.js.map
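To illustrate the precedence rules documented above (client options win over environment variables when both are set, and invalid severities fall back to the default), a small sketch; MongoLogger is marked @internal, so the deep require path and every option value here are illustrative assumptions:

const { Writable } = require('stream');
const { MongoLogger } = require('mongodb/lib/mongo_logger'); // internal module

const sink = new Writable({ write(chunk, encoding, callback) { callback(); } });
const resolved = MongoLogger.resolveOptions(
  { MONGODB_LOG_ALL: 'warn', MONGODB_LOG_COMMAND: 'not-a-level' }, // environment
  { mongodbLogPath: sink } // client options
);
console.log(resolved.componentSeverities.command); // 'warn' (invalid value falls back to the default)
console.log(resolved.maxDocumentLength); // 1000 (built-in default)
console.log(resolved.logDestination === sink); // true (client option wins)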

Some files were not shown because too many files have changed in this diff