progress on migrating to heex templates and font-icons

Adam Piontek 2022-08-13 07:32:36 -04:00
parent d43daafdb7
commit 3eff955672
21793 changed files with 2161968 additions and 16895 deletions

796
assets_old/node_modules/cacache/CHANGELOG.md generated vendored Normal file

@@ -0,0 +1,796 @@
# Changelog
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
### [15.0.6](https://github.com/npm/cacache/compare/v15.0.5...v15.0.6) (2021-03-22)
### [15.0.5](https://github.com/npm/cacache/compare/v15.0.4...v15.0.5) (2020-07-11)
### [15.0.4](https://github.com/npm/cacache/compare/v15.0.3...v15.0.4) (2020-06-03)
### Bug Fixes
* replace move-file dep with @npmcli/move-file ([bf88af0](https://github.com/npm/cacache/commit/bf88af04e50cca9b54041151139ffc1fd415e2dc)), closes [#37](https://github.com/npm/cacache/issues/37)
### [15.0.3](https://github.com/npm/cacache/compare/v15.0.2...v15.0.3) (2020-04-28)
### Bug Fixes
* actually remove move-concurrently dep ([29e6eec](https://github.com/npm/cacache/commit/29e6eec9fee73444ee09daf1c1be06ddd5fe57f6))
### [15.0.2](https://github.com/npm/cacache/compare/v15.0.1...v15.0.2) (2020-04-28)
### Bug Fixes
* tacks should be a dev dependency ([93ec158](https://github.com/npm/cacache/commit/93ec15852f0fdf1753ea7f75b4b8926daf8a7565))
## [15.0.1](https://github.com/npm/cacache/compare/v15.0.0...v15.0.1) (2020-04-27)
* **deps:** Use move-file instead of move-file-concurrently. ([92b125](https://github.com/npm/cacache/commit/92b1251a11b9848878b6c0d101b18bd8845acaa6))
## [15.0.0](https://github.com/npm/cacache/compare/v14.0.0...v15.0.0) (2020-02-18)
### ⚠ BREAKING CHANGES
* drop figgy-pudding and use canonical option names.
### Features
* remove figgy-pudding ([57d11bc](https://github.com/npm/cacache/commit/57d11bce34f979247d1057d258acc204c4944491))
## [14.0.0](https://github.com/npm/cacache/compare/v13.0.1...v14.0.0) (2020-01-28)
### ⚠ BREAKING CHANGES
* **deps:** bumps engines to >= 10
* **deps:** tar v6 and mkdirp v1 ([5a66e7a](https://github.com/npm/cacache/commit/5a66e7a))
### [13.0.1](https://github.com/npm/cacache/compare/v13.0.0...v13.0.1) (2019-09-30)
### Bug Fixes
* **fix-owner:** chownr.sync quits on non-root uid ([08801be](https://github.com/npm/cacache/commit/08801be))
## [13.0.0](https://github.com/npm/cacache/compare/v12.0.3...v13.0.0) (2019-09-25)
### ⚠ BREAKING CHANGES
* This subtly changes the streaming interface of
everything in cacache that streams, which is, well, everything in
cacache. Most users will probably not notice, but any code that
depended on stream behavior always being deferred until next tick will
need to adjust.
The mississippi methods 'to', 'from', 'through', and so on, have been
replaced with their Minipass counterparts, and streaming interaction
with the file system is done via fs-minipass.
The following modules are of interest here:
- [minipass](http://npm.im/minipass) The core stream library.
- [fs-minipass](http://npm.im/fs-minipass) Note that the 'WriteStream'
class from fs-minipass is _not_ a Minipass stream, but rather a plain
old EventEmitter that duck types as a Writable.
- [minipass-collect](http://npm.im/minipass-collect) Gather up all the
data from a stream. Cacache only uses Collect.PassThrough, which is a
basic Minipass passthrough stream which emits a 'collect' event with
the completed data just before the 'end' event.
- [minipass-pipeline](http://npm.im/minipass-pipeline) Connect one or
more streams into a pipe chain. Errors anywhere in the pipeline are
proxied down the chain and then up to the Pipeline object itself.
Writes go into the head, reads go to the tail. Used in place of
pump() and pumpify().
- [minipass-flush](http://npm.im/minipass-flush) A Minipass passthrough
stream that defers its 'end' event until after a flush() method has
completed (either calling the supplied callback, or returning a
promise.) Use in place of flush-write-stream (aka mississippi.to).
Streams from through2, concat-stream, and the behavior provided by
end-of-stream are all implemented in Minipass itself.
Features of interest to cacache, which make Minipass a particularly good
fit:
- All of the 'endish' events are normalized, so we can just listen on
'end' and know that finish, prefinish, and close will be handled as
well.
- Minipass doesn't waste time [containing
zalgo](https://blog.izs.me/2013/08/designing-apis-for-asynchrony).
- Minipass has built-in support for promises that indicate the end or
error: stream.promise(), stream.collect(), and stream.concat().
- With reliable and consistent timing guarantees, much less
error-checking logic is required. We can be more confident that an
error is being thrown or emitted in the correct place, rather than in
a callback which is deferred, resulting in a hung promise or
uncaughtException.
The biggest downside of Minipass is that it lacks some of the internal
characteristics of node-core streams, which many community modules use
to identify streams. They have no _writableState or _readableState
objects, or _read or _write methods. As a result, the is-stream module
(at least, at the time of this commit) doesn't recognize Minipass
streams as readable or writable streams.
All in all, the changes required of downstream users should be minimal,
but are unlikely to be zero. Hence the semver major change.
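For callers adjusting to this change, the promise helpers listed above are the simplest migration path. A minimal sketch, assuming a cache at `/tmp/my-cache` with an entry under `my-key` (both hypothetical values):

```javascript
const cacache = require('cacache')

// cacache.get.stream() now returns a Minipass-based stream, so the
// promise helpers are available directly on it. Note that events may
// fire synchronously rather than being deferred to the next tick.
const stream = cacache.get.stream('/tmp/my-cache', 'my-key')
stream.concat().then(data => {
  // `data` is the full Buffer of cached content
  console.log(`read ${data.length} bytes`)
})
```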
### Features
* replace all streams with Minipass streams ([f4c0962](https://github.com/npm/cacache/commit/f4c0962))
* **deps:** Add minipass and minipass-pipeline ([a6545a9](https://github.com/npm/cacache/commit/a6545a9))
* **promise:** converted .resolve to native promise, converted .map and .reduce to native ([220c56d](https://github.com/npm/cacache/commit/220c56d))
* **promise:** individually promisifying functions as needed ([74b939e](https://github.com/npm/cacache/commit/74b939e))
* **promise:** moved .reject from bluebird to native promise ([1d56da1](https://github.com/npm/cacache/commit/1d56da1))
* **promise:** removed .fromNode, removed .join ([9c457a0](https://github.com/npm/cacache/commit/9c457a0))
* **promise:** removed .map, replaced with p-map. removed .try ([cc3ee05](https://github.com/npm/cacache/commit/cc3ee05))
* **promise:** removed .tap ([0260f12](https://github.com/npm/cacache/commit/0260f12))
* **promise:** removed .using/.disposer ([5d832f3](https://github.com/npm/cacache/commit/5d832f3))
* **promise:** removed bluebird ([c21298c](https://github.com/npm/cacache/commit/c21298c))
* **promise:** removed bluebird specific .catch calls ([28aeeac](https://github.com/npm/cacache/commit/28aeeac))
* **promise:** replaced .reduce and .mapSeries ([478f5cb](https://github.com/npm/cacache/commit/478f5cb))
### [12.0.3](https://github.com/npm/cacache/compare/v12.0.2...v12.0.3) (2019-08-19)
### Bug Fixes
* do not chown if not running as root ([2d80af9](https://github.com/npm/cacache/commit/2d80af9))
### [12.0.2](https://github.com/npm/cacache/compare/v12.0.1...v12.0.2) (2019-07-19)
### [12.0.1](https://github.com/npm/cacache/compare/v12.0.0...v12.0.1) (2019-07-19)
* **deps:** Abstracted out `lib/util/infer-owner.js` to
[@npmcli/infer-owner](https://www.npmjs.com/package/@npmcli/infer-owner)
so that it could be more easily used in other parts of the npm CLI.
## [12.0.0](https://github.com/npm/cacache/compare/v11.3.3...v12.0.0) (2019-07-15)
### Features
* infer uid/gid instead of accepting as options ([ac84d14](https://github.com/npm/cacache/commit/ac84d14))
* **i18n:** add another error message ([676cb32](https://github.com/npm/cacache/commit/676cb32))
### BREAKING CHANGES
* the uid gid options are no longer respected or
necessary. As of this change, cacache will always match the cache
contents to the ownership of the cache directory (or its parent
directory), regardless of what the caller passes in.
Reasoning:
The number one reason to use a uid or gid option was to keep root-owned
files from causing problems in the cache. In npm's case, this meant
that CLI's ./lib/command.js had to work out the appropriate uid and gid,
then pass it to the libnpmcommand module, which had to in turn pass the
uid and gid to npm-registry-fetch, which then passed it to
make-fetch-happen, which passed it to cacache. (For package fetching,
pacote would be in that mix as well.)
Added to that, `cacache.rm()` will actually _write_ a file into the
cache index, but has no way to accept an option so that its call to
entry-index.js will write the index with the appropriate uid/gid.
Little ownership bugs were all over the place, and tricky to trace
through. (Why should make-fetch-happen even care about accepting or
passing uids and gids? It's an http library.)
This change allows us to keep the cache from having mixed ownership in
any situation.
Of course, this _does_ mean that if you have a root-owned but
user-writable folder (for example, `/tmp`), then the cache will try to
chown everything to root.
The solution is for the user to create a folder, make it user-owned, and
use that, rather than relying on cacache to create the root cache folder.
If we decide to restore the uid/gid opts, and use ownership inference
only when uid/gid are unset, then take care to also make rm take an
option object, and pass it through to entry-index.js.
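A minimal sketch of the suggested workaround above (paths are illustrative):

```javascript
const fs = require('fs')
const os = require('os')
const path = require('path')
const cacache = require('cacache')

// Create a user-owned cache folder up front, rather than letting
// cacache create the root cache folder under a root-owned directory
// such as /tmp.
const cachePath = path.join(os.homedir(), '.my-app', 'cache')
fs.mkdirSync(cachePath, { recursive: true })

cacache.put(cachePath, 'some-key', 'some data').then(() => {
  // cache contents now match the ownership of cachePath
})
```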
### [11.3.3](https://github.com/npm/cacache/compare/v11.3.2...v11.3.3) (2019-06-17)
### Bug Fixes
* **audit:** npm audit fix ([200a6d5](https://github.com/npm/cacache/commit/200a6d5))
* **config:** Add ssri config 'error' option ([#146](https://github.com/npm/cacache/issues/146)) ([47de8f5](https://github.com/npm/cacache/commit/47de8f5))
* **deps:** npm audit fix ([481a7dc](https://github.com/npm/cacache/commit/481a7dc))
* **standard:** standard --fix ([7799149](https://github.com/npm/cacache/commit/7799149))
* **write:** avoid another cb never called situation ([5156561](https://github.com/npm/cacache/commit/5156561))
<a name="11.3.2"></a>
## [11.3.2](https://github.com/npm/cacache/compare/v11.3.1...v11.3.2) (2018-12-21)
### Bug Fixes
* **get:** make sure to handle errors in the .then ([b10bcd0](https://github.com/npm/cacache/commit/b10bcd0))
<a name="11.3.1"></a>
## [11.3.1](https://github.com/npm/cacache/compare/v11.3.0...v11.3.1) (2018-11-05)
### Bug Fixes
* **get:** export hasContent.sync properly ([d76c920](https://github.com/npm/cacache/commit/d76c920))
<a name="11.3.0"></a>
# [11.3.0](https://github.com/npm/cacache/compare/v11.2.0...v11.3.0) (2018-11-05)
### Features
* **get:** add sync API for reading ([db1e094](https://github.com/npm/cacache/commit/db1e094))
<a name="11.2.0"></a>
# [11.2.0](https://github.com/npm/cacache/compare/v11.1.0...v11.2.0) (2018-08-08)
### Features
* **read:** add sync support to other internal read.js fns ([fe638b6](https://github.com/npm/cacache/commit/fe638b6))
<a name="11.1.0"></a>
# [11.1.0](https://github.com/npm/cacache/compare/v11.0.3...v11.1.0) (2018-08-01)
### Features
* **read:** add sync support for low-level content read ([b43af83](https://github.com/npm/cacache/commit/b43af83))
<a name="11.0.3"></a>
## [11.0.3](https://github.com/npm/cacache/compare/v11.0.2...v11.0.3) (2018-08-01)
### Bug Fixes
* **config:** add ssri config options ([#136](https://github.com/npm/cacache/issues/136)) ([10d5d9a](https://github.com/npm/cacache/commit/10d5d9a))
* **perf:** refactor content.read to avoid lstats ([c5ac10e](https://github.com/npm/cacache/commit/c5ac10e))
* **test:** oops when removing safe-buffer ([1950490](https://github.com/npm/cacache/commit/1950490))
<a name="11.0.2"></a>
## [11.0.2](https://github.com/npm/cacache/compare/v11.0.1...v11.0.2) (2018-05-07)
### Bug Fixes
* **verify:** size param no longer lost in a verify ([#131](https://github.com/npm/cacache/issues/131)) ([c614a19](https://github.com/npm/cacache/commit/c614a19)), closes [#130](https://github.com/npm/cacache/issues/130)
<a name="11.0.1"></a>
## [11.0.1](https://github.com/npm/cacache/compare/v11.0.0...v11.0.1) (2018-04-10)
<a name="11.0.0"></a>
# [11.0.0](https://github.com/npm/cacache/compare/v10.0.4...v11.0.0) (2018-04-09)
### Features
* **opts:** use figgy-pudding for opts ([#128](https://github.com/npm/cacache/issues/128)) ([33d4eed](https://github.com/npm/cacache/commit/33d4eed))
### meta
* drop support for node@4 ([529f347](https://github.com/npm/cacache/commit/529f347))
### BREAKING CHANGES
* node@4 is no longer supported
<a name="10.0.4"></a>
## [10.0.4](https://github.com/npm/cacache/compare/v10.0.3...v10.0.4) (2018-02-16)
<a name="10.0.3"></a>
## [10.0.3](https://github.com/npm/cacache/compare/v10.0.2...v10.0.3) (2018-02-16)
### Bug Fixes
* **content:** rethrow aggregate errors as ENOENT ([fa918f5](https://github.com/npm/cacache/commit/fa918f5))
<a name="10.0.2"></a>
## [10.0.2](https://github.com/npm/cacache/compare/v10.0.1...v10.0.2) (2018-01-07)
### Bug Fixes
* **ls:** deleted entries could cause a premature stream EOF ([347dc36](https://github.com/npm/cacache/commit/347dc36))
<a name="10.0.1"></a>
## [10.0.1](https://github.com/npm/cacache/compare/v10.0.0...v10.0.1) (2017-11-15)
### Bug Fixes
* **move-file:** actually use the fallback to `move-concurrently` (#110) ([073fbe1](https://github.com/npm/cacache/commit/073fbe1))
<a name="10.0.0"></a>
# [10.0.0](https://github.com/npm/cacache/compare/v9.3.0...v10.0.0) (2017-10-23)
### Features
* **license:** relicense to ISC (#111) ([fdbb4e5](https://github.com/npm/cacache/commit/fdbb4e5))
### Performance Improvements
* more copyFile benchmarks ([63787bb](https://github.com/npm/cacache/commit/63787bb))
### BREAKING CHANGES
* **license:** the license has been changed from CC0-1.0 to ISC.
<a name="9.3.0"></a>
# [9.3.0](https://github.com/npm/cacache/compare/v9.2.9...v9.3.0) (2017-10-07)
### Features
* **copy:** added cacache.get.copy api for fast copies (#107) ([067b5f6](https://github.com/npm/cacache/commit/067b5f6))
<a name="9.2.9"></a>
## [9.2.9](https://github.com/npm/cacache/compare/v9.2.8...v9.2.9) (2017-06-17)
<a name="9.2.8"></a>
## [9.2.8](https://github.com/npm/cacache/compare/v9.2.7...v9.2.8) (2017-06-05)
### Bug Fixes
* **ssri:** bump ssri for bugfix ([c3232ea](https://github.com/npm/cacache/commit/c3232ea))
<a name="9.2.7"></a>
## [9.2.7](https://github.com/npm/cacache/compare/v9.2.6...v9.2.7) (2017-06-05)
### Bug Fixes
* **content:** make verified content completely read-only (#96) ([4131196](https://github.com/npm/cacache/commit/4131196))
<a name="9.2.6"></a>
## [9.2.6](https://github.com/npm/cacache/compare/v9.2.5...v9.2.6) (2017-05-31)
### Bug Fixes
* **node:** update ssri to prevent old node 4 crash ([5209ffe](https://github.com/npm/cacache/commit/5209ffe))
<a name="9.2.5"></a>
## [9.2.5](https://github.com/npm/cacache/compare/v9.2.4...v9.2.5) (2017-05-25)
### Bug Fixes
* **deps:** fix lockfile issues and bump ssri ([84e1d7e](https://github.com/npm/cacache/commit/84e1d7e))
<a name="9.2.4"></a>
## [9.2.4](https://github.com/npm/cacache/compare/v9.2.3...v9.2.4) (2017-05-24)
### Bug Fixes
* **deps:** bumping deps ([bbccb12](https://github.com/npm/cacache/commit/bbccb12))
<a name="9.2.3"></a>
## [9.2.3](https://github.com/npm/cacache/compare/v9.2.2...v9.2.3) (2017-05-24)
### Bug Fixes
* **rm:** stop crashing if content is missing on rm ([ac90bc0](https://github.com/npm/cacache/commit/ac90bc0))
<a name="9.2.2"></a>
## [9.2.2](https://github.com/npm/cacache/compare/v9.2.1...v9.2.2) (2017-05-14)
### Bug Fixes
* **i18n:** lets pretend this didn't happen ([519b4ee](https://github.com/npm/cacache/commit/519b4ee))
<a name="9.2.1"></a>
## [9.2.1](https://github.com/npm/cacache/compare/v9.2.0...v9.2.1) (2017-05-14)
### Bug Fixes
* **docs:** fixing translation messup ([bb9e4f9](https://github.com/npm/cacache/commit/bb9e4f9))
<a name="9.2.0"></a>
# [9.2.0](https://github.com/npm/cacache/compare/v9.1.0...v9.2.0) (2017-05-14)
### Features
* **i18n:** add Spanish translation for API ([531f9a4](https://github.com/npm/cacache/commit/531f9a4))
<a name="9.1.0"></a>
# [9.1.0](https://github.com/npm/cacache/compare/v9.0.0...v9.1.0) (2017-05-14)
### Features
* **i18n:** Add Spanish translation and i18n setup (#91) ([323b90c](https://github.com/npm/cacache/commit/323b90c))
<a name="9.0.0"></a>
# [9.0.0](https://github.com/npm/cacache/compare/v8.0.0...v9.0.0) (2017-04-28)
### Bug Fixes
* **memoization:** actually use the LRU ([0e55dc9](https://github.com/npm/cacache/commit/0e55dc9))
### Features
* **memoization:** memoizers can be injected through opts.memoize (#90) ([e5614c7](https://github.com/npm/cacache/commit/e5614c7))
### BREAKING CHANGES
* **memoization:** If you were passing an object to opts.memoize, it will now be used as an injected memoization object. If you were only passing booleans and other non-objects through that option, no changes are needed.
<a name="8.0.0"></a>
# [8.0.0](https://github.com/npm/cacache/compare/v7.1.0...v8.0.0) (2017-04-22)
### Features
* **read:** change hasContent to return {sri, size} (#88) ([bad6c49](https://github.com/npm/cacache/commit/bad6c49)), closes [#87](https://github.com/npm/cacache/issues/87)
### BREAKING CHANGES
* **read:** hasContent now returns an object with `{sri, size}` instead of `sri`. Use `result.sri` anywhere that needed the old return value.
<a name="7.1.0"></a>
# [7.1.0](https://github.com/npm/cacache/compare/v7.0.5...v7.1.0) (2017-04-20)
### Features
* **size:** handle content size info (#49) ([91230af](https://github.com/npm/cacache/commit/91230af))
<a name="7.0.5"></a>
## [7.0.5](https://github.com/npm/cacache/compare/v7.0.4...v7.0.5) (2017-04-18)
### Bug Fixes
* **integrity:** new ssri with fixed integrity stream ([6d13e8e](https://github.com/npm/cacache/commit/6d13e8e))
* **write:** wrap stuff in promises to improve errors ([3624fc5](https://github.com/npm/cacache/commit/3624fc5))
<a name="7.0.4"></a>
## [7.0.4](https://github.com/npm/cacache/compare/v7.0.3...v7.0.4) (2017-04-15)
### Bug Fixes
* **fix-owner:** throw away ENOENTs on chownr ([d49bbcd](https://github.com/npm/cacache/commit/d49bbcd))
<a name="7.0.3"></a>
## [7.0.3](https://github.com/npm/cacache/compare/v7.0.2...v7.0.3) (2017-04-05)
### Bug Fixes
* **read:** fixing error message for integrity verification failures ([9d4f0a5](https://github.com/npm/cacache/commit/9d4f0a5))
<a name="7.0.2"></a>
## [7.0.2](https://github.com/npm/cacache/compare/v7.0.1...v7.0.2) (2017-04-03)
### Bug Fixes
* **integrity:** use EINTEGRITY error code and update ssri ([8dc2e62](https://github.com/npm/cacache/commit/8dc2e62))
<a name="7.0.1"></a>
## [7.0.1](https://github.com/npm/cacache/compare/v7.0.0...v7.0.1) (2017-04-03)
### Bug Fixes
* **docs:** fix header name conflict in readme ([afcd456](https://github.com/npm/cacache/commit/afcd456))
<a name="7.0.0"></a>
# [7.0.0](https://github.com/npm/cacache/compare/v6.3.0...v7.0.0) (2017-04-03)
### Bug Fixes
* **test:** fix content.write tests when running in docker ([d2e9b6a](https://github.com/npm/cacache/commit/d2e9b6a))
### Features
* **integrity:** subresource integrity support (#78) ([b1e731f](https://github.com/npm/cacache/commit/b1e731f))
### BREAKING CHANGES
* **integrity:** The entire API has been overhauled to use SRI hashes instead of digest/hashAlgorithm pairs. SRI hashes follow the Subresource Integrity standard and support strings and objects compatible with [`ssri`](https://npm.im/ssri).
* This change bumps the index version, which will invalidate all previous index entries. Content entries will remain intact, and existing caches will automatically reuse any content from before this breaking change.
* `cacache.get.info()`, `cacache.ls()`, and `cacache.ls.stream()` will now return objects that look like this:
```
{
  key: String,
  integrity: '<algorithm>-<base64hash>',
  path: ContentPath,
  time: Date<ms>,
  metadata: Any
}
```
* `opts.digest` and `opts.hashAlgorithm` are obsolete for any API calls that used them.
* Anywhere `opts.digest` was accepted, `opts.integrity` is now an option. Any valid SRI hash is accepted here -- multiple hash entries will be resolved according to the standard: first, the "strongest" hash algorithm will be picked, and then each of the entries for that algorithm will be matched against the content. Content will be validated if *any* of the entries match (so, a single integrity string can be used for multiple "versions" of the same document/data).
* `put.byDigest()`, `put.stream.byDigest`, `get.byDigest()` and `get.stream.byDigest()` now expect an SRI instead of a `digest` + `opts.hashAlgorithm` pairing.
* `get.hasContent()` now expects an integrity hash instead of a digest. If content exists, it will return the specific single integrity hash that was found in the cache.
* `verify()` has learned to handle integrity-based caches, and forgotten how to handle old-style cache indices due to the format change.
* `cacache.rm.content()` now expects an integrity hash instead of a hex digest.
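A rough before/after for the digest-based calls, using the hex digest and SRI string from the README's conversion example (`cachePath` as in the README; values are illustrative):

```javascript
// Before (v6.x): hex digest plus a separate hashAlgorithm option
cacache.get.byDigest(cachePath, '5f5513f8822fdbe5145af33b64d8d970dcf95c6e', {
  hashAlgorithm: 'sha1'
})

// After (v7.0.0): a single SRI string carries both pieces
cacache.get.byDigest(cachePath, 'sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=')
```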
<a name="6.3.0"></a>
# [6.3.0](https://github.com/npm/cacache/compare/v6.2.0...v6.3.0) (2017-04-01)
### Bug Fixes
* **fixOwner:** ignore EEXIST race condition from mkdirp ([4670e9b](https://github.com/npm/cacache/commit/4670e9b))
* **index:** ignore index removal races when inserting ([b9d2fa2](https://github.com/npm/cacache/commit/b9d2fa2))
* **memo:** use lru-cache for better mem management (#75) ([d8ac5aa](https://github.com/npm/cacache/commit/d8ac5aa))
### Features
* **dependencies:** Switch to move-concurrently (#77) ([dc6482d](https://github.com/npm/cacache/commit/dc6482d))
<a name="6.2.0"></a>
# [6.2.0](https://github.com/npm/cacache/compare/v6.1.2...v6.2.0) (2017-03-15)
### Bug Fixes
* **index:** additional bucket entry verification with checksum (#72) ([f8e0f25](https://github.com/npm/cacache/commit/f8e0f25))
* **verify:** return fixOwner.chownr promise ([6818521](https://github.com/npm/cacache/commit/6818521))
### Features
* **tmp:** safe tmp dir creation/management util (#73) ([c42da71](https://github.com/npm/cacache/commit/c42da71))
<a name="6.1.2"></a>
## [6.1.2](https://github.com/npm/cacache/compare/v6.1.1...v6.1.2) (2017-03-13)
### Bug Fixes
* **index:** set default hashAlgorithm ([d6eb2f0](https://github.com/npm/cacache/commit/d6eb2f0))
<a name="6.1.1"></a>
## [6.1.1](https://github.com/npm/cacache/compare/v6.1.0...v6.1.1) (2017-03-13)
### Bug Fixes
* **coverage:** bumping coverage for verify (#71) ([0b7faf6](https://github.com/npm/cacache/commit/0b7faf6))
* **deps:** glob should have been a regular dep :< ([0640bc4](https://github.com/npm/cacache/commit/0640bc4))
<a name="6.1.0"></a>
# [6.1.0](https://github.com/npm/cacache/compare/v6.0.2...v6.1.0) (2017-03-12)
### Bug Fixes
* **coverage:** more coverage for content reads (#70) ([ef4f70a](https://github.com/npm/cacache/commit/ef4f70a))
* **tests:** use safe-buffer because omfg (#69) ([6ab8132](https://github.com/npm/cacache/commit/6ab8132))
### Features
* **rm:** limited rm.all and fixed bugs (#66) ([d5d25ba](https://github.com/npm/cacache/commit/d5d25ba)), closes [#66](https://github.com/npm/cacache/issues/66)
* **verify:** tested, working cache verifier/gc (#68) ([45ad77a](https://github.com/npm/cacache/commit/45ad77a))
<a name="6.0.2"></a>
## [6.0.2](https://github.com/npm/cacache/compare/v6.0.1...v6.0.2) (2017-03-11)
### Bug Fixes
* **index:** segment cache items with another subbucket (#64) ([c3644e5](https://github.com/npm/cacache/commit/c3644e5))
<a name="6.0.1"></a>
## [6.0.1](https://github.com/npm/cacache/compare/v6.0.0...v6.0.1) (2017-03-05)
### Bug Fixes
* **docs:** Missed spots in README ([8ffb7fa](https://github.com/npm/cacache/commit/8ffb7fa))
<a name="6.0.0"></a>
# [6.0.0](https://github.com/npm/cacache/compare/v5.0.3...v6.0.0) (2017-03-05)
### Bug Fixes
* **api:** keep memo cache mostly-internal ([2f72d0a](https://github.com/npm/cacache/commit/2f72d0a))
* **content:** use the rest of the string, not the whole string ([fa8f3c3](https://github.com/npm/cacache/commit/fa8f3c3))
* **deps:** removed `format-number@2.0.2` ([1187791](https://github.com/npm/cacache/commit/1187791))
* **deps:** removed inflight@1.0.6 ([0d1819c](https://github.com/npm/cacache/commit/0d1819c))
* **deps:** rimraf@2.6.1 ([9efab6b](https://github.com/npm/cacache/commit/9efab6b))
* **deps:** standard@9.0.0 ([4202cba](https://github.com/npm/cacache/commit/4202cba))
* **deps:** tap@10.3.0 ([aa03088](https://github.com/npm/cacache/commit/aa03088))
* **deps:** weallcontribute@1.0.8 ([ad4f4dc](https://github.com/npm/cacache/commit/ad4f4dc))
* **docs:** add security note to hashKey ([03f81ba](https://github.com/npm/cacache/commit/03f81ba))
* **hashes:** change default hashAlgorithm to sha512 ([ea00ba6](https://github.com/npm/cacache/commit/ea00ba6))
* **hashes:** missed a spot for hashAlgorithm defaults ([45997d8](https://github.com/npm/cacache/commit/45997d8))
* **index:** add length header before JSON for verification ([fb8cb4d](https://github.com/npm/cacache/commit/fb8cb4d))
* **index:** change index filenames to sha1s of keys ([bbc5fca](https://github.com/npm/cacache/commit/bbc5fca))
* **index:** who cares about race conditions anyway ([b1d3888](https://github.com/npm/cacache/commit/b1d3888))
* **perf:** bulk-read get+read for massive speed ([d26cdf9](https://github.com/npm/cacache/commit/d26cdf9))
* **perf:** use bulk file reads for index reads ([79a8891](https://github.com/npm/cacache/commit/79a8891))
* **put-stream:** remove tmp file on stream insert error ([65f6632](https://github.com/npm/cacache/commit/65f6632))
* **put-stream:** robustified and predictibilized ([daf9e08](https://github.com/npm/cacache/commit/daf9e08))
* **put-stream:** use new promise API for moves ([1d36013](https://github.com/npm/cacache/commit/1d36013))
* **readme:** updated to reflect new default hashAlgo ([c60a2fa](https://github.com/npm/cacache/commit/c60a2fa))
* **verify:** tiny typo fix ([db22d05](https://github.com/npm/cacache/commit/db22d05))
### Features
* **api:** converted external api ([7bf032f](https://github.com/npm/cacache/commit/7bf032f))
* **cacache:** exported clearMemoized() utility ([8d2c5b6](https://github.com/npm/cacache/commit/8d2c5b6))
* **cache:** add versioning to content and index ([31bc549](https://github.com/npm/cacache/commit/31bc549))
* **content:** collate content files into subdirs ([c094d9f](https://github.com/npm/cacache/commit/c094d9f))
* **deps:** `@npmcorp/move@1.0.0` ([bdd00bf](https://github.com/npm/cacache/commit/bdd00bf))
* **deps:** `bluebird@3.4.7` ([3a17aff](https://github.com/npm/cacache/commit/3a17aff))
* **deps:** `promise-inflight@1.0.1` ([a004fe6](https://github.com/npm/cacache/commit/a004fe6))
* **get:** added memoization support for get ([c77d794](https://github.com/npm/cacache/commit/c77d794))
* **get:** export hasContent ([2956ec3](https://github.com/npm/cacache/commit/2956ec3))
* **index:** add hashAlgorithm and format insert ret val ([b639746](https://github.com/npm/cacache/commit/b639746))
* **index:** collate index files into subdirs ([e8402a5](https://github.com/npm/cacache/commit/e8402a5))
* **index:** promisify entry index ([cda3335](https://github.com/npm/cacache/commit/cda3335))
* **memo:** added memoization lib ([da07b92](https://github.com/npm/cacache/commit/da07b92))
* **memo:** export memoization api ([954b1b3](https://github.com/npm/cacache/commit/954b1b3))
* **move-file:** add move fallback for weird errors ([5cf4616](https://github.com/npm/cacache/commit/5cf4616))
* **perf:** bulk content write api ([51b536e](https://github.com/npm/cacache/commit/51b536e))
* **put:** added memoization support to put ([b613a70](https://github.com/npm/cacache/commit/b613a70))
* **read:** switched to promises ([a869362](https://github.com/npm/cacache/commit/a869362))
* **rm:** added memoization support to rm ([4205cf0](https://github.com/npm/cacache/commit/4205cf0))
* **rm:** switched to promises ([a000d24](https://github.com/npm/cacache/commit/a000d24))
* **util:** promise-inflight ownership fix requests ([9517cd7](https://github.com/npm/cacache/commit/9517cd7))
* **util:** use promises for api ([ae204bb](https://github.com/npm/cacache/commit/ae204bb))
* **verify:** converted to Promises ([f0b3974](https://github.com/npm/cacache/commit/f0b3974))
### BREAKING CHANGES
* cache: index/content directories are now versioned. Previous caches are no longer compatible and cannot be migrated.
* util: fix-owner now uses Promises instead of callbacks
* index: Previously-generated index entries are no longer compatible and the index must be regenerated.
* index: The index format has changed and previous caches are no longer compatible. Existing caches will need to be regenerated.
* hashes: Default hashAlgorithm changed from sha1 to sha512. If you
rely on the prior setting, pass `opts.hashAlgorithm` in explicitly.
* content: Previously-generated content directories are no longer compatible
and must be regenerated.
* verify: API is now promise-based
* read: Switches to a Promise-based API and removes callback stuff
* rm: Switches to a Promise-based API and removes callback stuff
* index: this changes the API to work off promises instead of callbacks
* api: this means we are going all in on promises now

16
assets_old/node_modules/cacache/LICENSE.md generated vendored Normal file

@@ -0,0 +1,16 @@
ISC License
Copyright (c) npm, Inc.
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE COPYRIGHT HOLDER DISCLAIMS
ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.

669
assets_old/node_modules/cacache/README.md generated vendored Normal file

@@ -0,0 +1,669 @@
# cacache [![npm version](https://img.shields.io/npm/v/cacache.svg)](https://npm.im/cacache) [![license](https://img.shields.io/npm/l/cacache.svg)](https://npm.im/cacache) [![Travis](https://img.shields.io/travis/npm/cacache.svg)](https://travis-ci.org/npm/cacache) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/cacache?svg=true)](https://ci.appveyor.com/project/npm/cacache) [![Coverage Status](https://coveralls.io/repos/github/npm/cacache/badge.svg?branch=latest)](https://coveralls.io/github/npm/cacache?branch=latest)
[`cacache`](https://github.com/npm/cacache) is a Node.js library for managing
local key and content address caches. It's really fast, really good at
concurrency, and it will never give you corrupted data, even if cache files
get corrupted or manipulated.
On systems that support user and group settings on files, cacache will
match the `uid` and `gid` values to the folder where the cache lives, even
when running as `root`.
It was written to be used as [npm](https://npm.im)'s local cache, but can
just as easily be used on its own.
## Install
`$ npm install --save cacache`
## Table of Contents
* [Example](#example)
* [Features](#features)
* [Contributing](#contributing)
* [API](#api)
* [Using localized APIs](#localized-api)
* Reading
* [`ls`](#ls)
* [`ls.stream`](#ls-stream)
* [`get`](#get-data)
* [`get.stream`](#get-stream)
* [`get.info`](#get-info)
* [`get.hasContent`](#get-hasContent)
* Writing
* [`put`](#put-data)
* [`put.stream`](#put-stream)
* [`rm.all`](#rm-all)
* [`rm.entry`](#rm-entry)
* [`rm.content`](#rm-content)
* Utilities
* [`clearMemoized`](#clear-memoized)
* [`tmp.mkdir`](#tmp-mkdir)
* [`tmp.withTmp`](#with-tmp)
* Integrity
* [Subresource Integrity](#integrity)
* [`verify`](#verify)
* [`verify.lastRun`](#verify-last-run)
### Example
```javascript
const cacache = require('cacache')
const fs = require('fs')

const tarball = '/path/to/mytar.tgz'
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'

// Cache it! Use `cachePath` as the root of the content cache
cacache.put(cachePath, key, '10293801983029384').then(integrity => {
  console.log(`Saved content to ${cachePath}.`)
})

const destination = '/tmp/mytar.tgz'

// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
  cachePath, key
).pipe(
  fs.createWriteStream(destination)
).on('finish', () => {
  console.log('done extracting!')
})

// The same thing, but skip the key index.
// (`integrityHash` is the integrity value resolved by `cacache.put()` above.)
cacache.get.byDigest(cachePath, integrityHash).then(data => {
  fs.writeFile(destination, data, err => {
    console.log('tarball data fetched based on its sha512sum and written out!')
  })
})
```
### Features
* Extraction by key or by content address (shasum, etc)
* [Subresource Integrity](#integrity) web standard support
* Multi-hash support - safely host sha1, sha512, etc, in a single cache
* Automatic content deduplication
* Fault tolerance (immune to corruption, partial writes, process races, etc)
* Consistency guarantees on read and write (full data verification)
* Lockless, high-concurrency cache access
* Streaming support
* Promise support
* Fast -- sub-millisecond reads and writes including verification
* Arbitrary metadata storage
* Garbage collection and additional offline verification
* Thorough test coverage
* There's probably a bloom filter in there somewhere. Those are cool, right? 🤔
### Contributing
The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! The [Contributor Guide](CONTRIBUTING.md) has all the information you need for everything from reporting bugs to contributing entire new features. Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.
All participants and maintainers in this project are expected to follow [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other.
Please refer to the [Changelog](CHANGELOG.md) for project history details, too.
Happy hacking!
### API
#### <a name="ls"></a> `> cacache.ls(cache) -> Promise<Object>`
Lists info for all entries currently in the cache as a single large object. Each
entry in the object will be keyed by the unique index key, with corresponding
[`get.info`](#get-info) objects as the values.
##### Example
```javascript
cacache.ls(cachePath).then(console.log)
// Output
{
  'my-thing': {
    key: 'my-thing',
    integrity: 'sha512-BaSe64/EnCoDED+HAsh==',
    path: '.testcache/content/deadbeef', // joined with `cachePath`
    time: 12345698490,
    size: 4023948,
    metadata: {
      name: 'blah',
      version: '1.2.3',
      description: 'this was once a package but now it is my-thing'
    }
  },
  'other-thing': {
    key: 'other-thing',
    integrity: 'sha1-ANothER+hasH=',
    path: '.testcache/content/bada55',
    time: 11992309289,
    size: 111112
  }
}
```
#### <a name="ls-stream"></a> `> cacache.ls.stream(cache) -> Readable`
Lists info for all entries currently in the cache. This works just like
[`ls`](#ls), except [`get.info`](#get-info) entries are returned as `'data'`
events on the returned stream.
##### Example
```javascript
cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
  key: 'my-thing',
  integrity: 'sha512-BaSe64HaSh',
  path: '.testcache/content/deadbeef', // joined with `cachePath`
  time: 12345698490,
  size: 13423,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}
{
  key: 'other-thing',
  integrity: 'whirlpool-WoWSoMuchSupport',
  path: '.testcache/content/bada55',
  time: 11992309289,
  size: 498023984029
}
{
  ...
}
```
#### <a name="get-data"></a> `> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})`
Returns an object with the cached data, digest, and metadata identified by
`key`. The `data` property of this object will be a `Buffer` instance that
presumably holds some data that means something to you. I'm sure you know what
to do with it! cacache just won't care.
`integrity` is a [Subresource
Integrity](#integrity)
string. That is, a string that can be used to verify `data`, which looks like
`<hash-algorithm>-<base64-integrity-hash>`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, the promise will be rejected.
A sub-function, `get.byDigest`, may be used for identical behavior, except the
lookup happens by integrity hash, bypassing the index entirely. This version of
the function *only* returns `data` itself, without any wrapper.
See: [options](#get-options)
##### Note
This function loads the entire cache entry into memory before returning it. If
you're dealing with Very Large data, consider using [`get.stream`](#get-stream)
instead.
##### Example
```javascript
// Look up by key
cacache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
  metadata: {
    thingName: 'my'
  },
  integrity: 'sha512-BaSe64HaSh',
  data: Buffer#<deadbeef>,
  size: 9320
}

// Look up by digest
cacache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log)
// Output:
Buffer#<deadbeef>
```
#### <a name="get-stream"></a> `> cacache.get.stream(cache, key, [opts]) -> Readable`
Returns a [Readable Stream](https://nodejs.org/api/stream.html#stream_readable_streams) of the cached data identified by `key`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, an error will be emitted.
`metadata` and `integrity` events will be emitted before the stream closes, if
you need to collect that extra data about the cached entry.
A sub-function, `get.stream.byDigest`, may be used for identical behavior,
except the lookup happens by integrity hash, bypassing the index entirely. This
version does not emit the `metadata` and `integrity` events at all.
See: [options](#get-options)
##### Example
```javascript
// Look up by key
cacache.get.stream(
  cachePath, 'my-thing'
).on('metadata', metadata => {
  console.log('metadata:', metadata)
}).on('integrity', integrity => {
  console.log('integrity:', integrity)
}).pipe(
  fs.createWriteStream('./x.tgz')
)
// Outputs:
metadata: { ... }
integrity: 'sha512-SoMeDIGest+64=='

// Look up by digest
cacache.get.stream.byDigest(
  cachePath, 'sha512-SoMeDIGest+64=='
).pipe(
  fs.createWriteStream('./x.tgz')
)
```
#### <a name="get-info"></a> `> cacache.get.info(cache, key) -> Promise`
Looks up `key` in the cache index, returning information about the entry if
one exists.
##### Fields
* `key` - Key the entry was looked up under. Matches the `key` argument.
* `integrity` - [Subresource Integrity hash](#integrity) for the content this entry refers to.
* `path` - Filesystem path where content is stored, joined with `cache` argument.
* `time` - Timestamp the entry was first added on.
* `metadata` - User-assigned metadata associated with the entry/content.
##### Example
```javascript
cacache.get.info(cachePath, 'my-thing').then(console.log)
// Output
{
  key: 'my-thing',
  integrity: 'sha256-MUSTVERIFY+ALL/THINGS==',
  path: '.testcache/content/deadbeef',
  time: 12345698490,
  size: 849234,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}
```
#### <a name="get-hasContent"></a> `> cacache.get.hasContent(cache, integrity) -> Promise`
Looks up a [Subresource Integrity hash](#integrity) in the cache. If content
exists for this `integrity`, it will return an object with the specific single
integrity hash that was found in the `sri` key, and the size of the found
content as `size`. If no content exists for this integrity, it will return `false`.
##### Example
```javascript
cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log)
// Output
{
  sri: {
    source: 'sha256-MUSTVERIFY+ALL/THINGS==',
    algorithm: 'sha256',
    digest: 'MUSTVERIFY+ALL/THINGS==',
    options: []
  },
  size: 9001
}

cacache.get.hasContent(cachePath, 'sha512-NOT+IN/CACHE==').then(console.log)
// Output
false
```
##### <a name="get-options"></a> Options
##### `opts.integrity`
If present, the content will be verified against this pre-calculated digest. If
the stored data does not match it, the read will fail with an `EINTEGRITY`
error.
##### `opts.memoize`
Default: null
If explicitly truthy, cacache will read from memory and memoize the data on bulk read. If `false`, cacache will always read from disk. By default, reader functions serve memoized data when it is available.
##### `opts.size`
If provided, the data will be verified to check that enough data was passed
through. If the size does not match, the read will fail with an `EBADSIZE`
error.
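A sketch combining these read options; the integrity and size values below are illustrative and would need to match the stored content:
```javascript
cacache.get(cachePath, 'my-thing', {
  integrity: 'sha512-BaSe64HaSh', // verify against this digest
  size: 9320,                     // expected byte count
  memoize: false                  // force a read from disk
}).then(({ data, metadata, integrity }) => {
  console.log(`verified read of ${data.length} bytes`)
})
```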
#### <a name="put-data"></a> `> cacache.put(cache, key, data, [opts]) -> Promise`
Inserts data passed to it into the cache. The returned Promise resolves with a
digest (generated according to [`opts.algorithms`](#optsalgorithms)) after the
cache entry has been successfully written.
See: [options](#put-options)
##### Example
```javascript
fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(data => {
  return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(integrity => {
  console.log('integrity hash is', integrity)
})
```
#### <a name="put-stream"></a> `> cacache.put.stream(cache, key, [opts]) -> Writable`
Returns a [Writable
Stream](https://nodejs.org/api/stream.html#stream_writable_streams) that inserts
data written to it into the cache. Emits an `integrity` event with the digest of
written contents when it succeeds.
See: [options](#put-options)
##### Example
```javascript
request.get(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
  cacache.put.stream(
    cachePath, 'registry.npmjs.org|cacache@1.0.0'
  ).on('integrity', d => console.log(`integrity digest is ${d}`))
)
```
##### <a name="put-options"></a> Options
##### `opts.metadata`
Arbitrary metadata to be attached to the inserted key.
##### `opts.size`
If provided, the data stream will be verified to check that enough data was
passed through. If there's more or less data than expected, insertion will fail
with an `EBADSIZE` error.
##### `opts.integrity`
If present, the pre-calculated digest for the inserted content. If this option
is provided and does not match the post-insertion digest, insertion will fail
with an `EINTEGRITY` error.
`algorithms` has no effect if this option is present.
##### `opts.algorithms`
Default: ['sha512']
Hashing algorithms to use when calculating the [subresource integrity
digest](#integrity)
for inserted data. Can use any algorithm listed in `crypto.getHashes()` or
`'omakase'`/`'お任せします'` to pick a random hash algorithm on each insertion. You
may also use any anagram of `'modnar'` to use this feature.
Currently only supports one algorithm at a time (i.e., an array length of
exactly `1`). Has no effect if `opts.integrity` is present.
##### `opts.memoize`
Default: null
If provided, cacache will memoize the given cache insertion in memory, bypassing
any filesystem checks for that key or digest in future cache fetches. Nothing
will be written to the in-memory cache unless this option is explicitly truthy.
If `opts.memoize` is an object or a `Map`-like (that is, an object with `get`
and `set` methods), it will be written to instead of the global memoization
cache.
Reading from disk can be forced by explicitly passing `memoize: false` to
the reader functions, but their default is to read from memory.
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
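A sketch that exercises several of these write options at once, including a `Map` as an injected memoization cache; `data` here stands in for whatever content you fetched earlier:
```javascript
const myMemo = new Map() // anything with get/set methods works

cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data, {
  metadata: { source: 'registry' }, // stored alongside the entry
  algorithms: ['sha512'],           // exactly one algorithm at a time
  memoize: myMemo                   // written here instead of the global cache
}).then(integrity => {
  console.log('stored with integrity', integrity)
})
```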
#### <a name="rm-all"></a> `> cacache.rm.all(cache) -> Promise`
Clears the entire cache, mainly by blowing away the cache directory itself.
##### Example
```javascript
cacache.rm.all(cachePath).then(() => {
  console.log('THE APOCALYPSE IS UPON US 😱')
})
```
#### <a name="rm-entry"></a> `> cacache.rm.entry(cache, key) -> Promise`
Alias: `cacache.rm`
Removes the index entry for `key`. Content will still be accessible if
requested directly by content address ([`get.stream.byDigest`](#get-stream)).
To remove the content itself (which might still be used by other entries), use
[`rm.content`](#rm-content). Or, to safely vacuum any unused content, use
[`verify`](#verify).
##### Example
```javascript
cacache.rm.entry(cachePath, 'my-thing').then(() => {
  console.log('I did not like it anyway')
})
```
#### <a name="rm-content"></a> `> cacache.rm.content(cache, integrity) -> Promise`
Removes the content identified by `integrity`. Any index entries referring to it
will not be usable again until the content is re-added to the cache with an
identical digest.
##### Example
```javascript
cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => {
  console.log('data for my-thing is gone!')
})
```
#### <a name="clear-memoized"></a> `> cacache.clearMemoized()`
Completely resets the in-memory entry cache.
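##### Example
A trivial illustration, dropping the global in-memory cache so subsequent reads hit the filesystem again:
```javascript
cacache.clearMemoized()
```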
#### <a name="tmp-mkdir"></a> `> tmp.mkdir(cache, opts) -> Promise<Path>`
Returns a unique temporary directory inside the cache's `tmp` dir. This
directory will use the same safe user assignment that all the other stuff uses.
Once the directory is made, it's the user's responsibility that all files
within are given the appropriate `gid`/`uid` ownership settings to match
the rest of the cache. If not, you can ask cacache to do it for you by
calling [`tmp.fix()`](#tmp-fix), which will fix all tmp directory
permissions.
If you want automatic cleanup of this directory, use
[`tmp.withTmp()`](#with-tmp).
See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
  fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})
```
#### <a name="tmp-fix"></a> `> tmp.fix(cache) -> Promise`
Sets the `uid` and `gid` properties on all files and folders within the tmp
folder to match the rest of the cache.
Use this after manually writing files into [`tmp.mkdir`](#tmp-mkdir) or
[`tmp.withTmp`](#with-tmp).
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
  writeFile(path.join(dir, 'file'), someData).then(() => {
    // make sure we didn't just put a root-owned file in the cache
    cacache.tmp.fix().then(() => {
      // all uids and gids match now
    })
  })
})
```
#### <a name="with-tmp"></a> `> tmp.withTmp(cache, opts, cb) -> Promise`
Creates a temporary directory with [`tmp.mkdir()`](#tmp-mkdir) and calls `cb`
with it. The created temporary directory will be removed when the return value
of `cb()` resolves; that is, the tmp directory will be automatically deleted
once that promise completes.
The same caveats apply when it comes to managing permissions for the tmp dir's
contents.
See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.withTmp(cache, dir => {
  return fs.writeFileAsync(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
}).then(() => {
  // `dir` no longer exists
})
```
##### <a name="tmp-options"></a> Options
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
#### <a name="integrity"></a> Subresource Integrity Digests
For content verification and addressing, cacache uses strings following the
[Subresource
Integrity spec](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity).
That is, any time cacache expects an `integrity` argument or option, it
should be in the format `<hashAlgorithm>-<base64-hash>`.
One deviation from the current spec is that cacache will support any hash
algorithms supported by the underlying Node.js process. You can use
`crypto.getHashes()` to see which ones you can use.
##### Generating Digests Yourself
If you have an existing content shasum, they are generally formatted as a
hexadecimal string (that is, a sha1 would look like:
`5f5513f8822fdbe5145af33b64d8d970dcf95c6e`). In order to be compatible with
cacache, you'll need to convert this to an equivalent subresource integrity
string. For this example, the corresponding hash would be:
`sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=`.
If you want to generate an integrity string yourself for existing data, you can
use something like this:
```javascript
const crypto = require('crypto')
const hashAlgorithm = 'sha512'
const data = 'foobarbaz'

const integrity = (
  hashAlgorithm +
  '-' +
  crypto.createHash(hashAlgorithm).update(data).digest('base64')
)
```
You can also use [`ssri`](https://npm.im/ssri) to have a richer set of functionality
around SRI strings, including generation, parsing, and translating from existing
hex-formatted strings.
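As a sketch of those same conversions with `ssri` (the hex digest is the sha1 example from above; `fromHex` and `fromData` are assumed from ssri's documented API):
```javascript
const ssri = require('ssri')

// hex-formatted shasum -> SRI string
const sri = ssri.fromHex('5f5513f8822fdbe5145af33b64d8d970dcf95c6e', 'sha1')
console.log(sri.toString()) // 'sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4='

// or generate an integrity object directly from data
const integrity = ssri.fromData('foobarbaz', { algorithms: ['sha512'] })
```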
#### <a name="verify"></a> `> cacache.verify(cache, opts) -> Promise`
Checks out and fixes up your cache:
* Cleans up corrupted or invalid index entries.
* Custom entry filtering options.
* Garbage collects any content entries not referenced by the index.
* Checks integrity for all content entries and removes invalid content.
* Fixes cache ownership.
* Removes the `tmp` directory in the cache and all its contents.
When it's done, it'll return an object with various stats about the verification
process, including amount of storage reclaimed, number of valid entries, number
of entries removed, etc.
##### <a name="verify-options"></a> Options
##### `opts.concurrency`
Default: 20
Number of files to read concurrently from the filesystem while cleaning up.
##### `opts.filter`
Receives a formatted entry. Return false to remove it.
Note: might be called more than once on the same entry.
##### `opts.log`
Custom logger function:
```javascript
log: { silly () {} }
// cacache will call it like: log.silly('verify', 'verifying cache at', cache)
```
##### Example
```sh
echo somegarbage >> $CACHEPATH/content/deadbeef
```
```javascript
cacache.verify(cachePath).then(stats => {
  // deadbeef collected, because of invalid checksum.
  console.log('cache is much nicer now! stats:', stats)
})
```
#### <a name="verify-last-run"></a> `> cacache.verify.lastRun(cache) -> Promise`
Returns a `Date` representing the last time `cacache.verify` was run on `cache`.
##### Example
```javascript
cacache.verify(cachePath).then(() => {
  cacache.verify.lastRun(cachePath).then(lastTime => {
    console.log('cacache.verify was last called on ' + lastTime)
  })
})
```

255
assets_old/node_modules/cacache/get.js generated vendored Normal file

@@ -0,0 +1,255 @@
'use strict'

const util = require('util')
const fs = require('fs')
const index = require('./lib/entry-index')
const memo = require('./lib/memoization')
const read = require('./lib/content/read')

const Minipass = require('minipass')
const Collect = require('minipass-collect')
const Pipeline = require('minipass-pipeline')

const writeFile = util.promisify(fs.writeFile)

module.exports = function get (cache, key, opts) {
  return getData(false, cache, key, opts)
}
module.exports.byDigest = function getByDigest (cache, digest, opts) {
  return getData(true, cache, digest, opts)
}

function getData (byDigest, cache, key, opts = {}) {
  const { integrity, memoize, size } = opts
  const memoized = byDigest
    ? memo.get.byDigest(cache, key, opts)
    : memo.get(cache, key, opts)
  if (memoized && memoize !== false) {
    return Promise.resolve(
      byDigest
        ? memoized
        : {
          metadata: memoized.entry.metadata,
          data: memoized.data,
          integrity: memoized.entry.integrity,
          size: memoized.entry.size
        }
    )
  }
  return (byDigest ? Promise.resolve(null) : index.find(cache, key, opts)).then(
    (entry) => {
      if (!entry && !byDigest) {
        throw new index.NotFoundError(cache, key)
      }
      return read(cache, byDigest ? key : entry.integrity, {
        integrity,
        size
      })
        .then((data) =>
          byDigest
            ? data
            : {
              data,
              metadata: entry.metadata,
              size: entry.size,
              integrity: entry.integrity
            }
        )
        .then((res) => {
          if (memoize && byDigest) {
            memo.put.byDigest(cache, key, res, opts)
          } else if (memoize) {
            memo.put(cache, entry, res.data, opts)
          }
          return res
        })
    }
  )
}

module.exports.sync = function get (cache, key, opts) {
  return getDataSync(false, cache, key, opts)
}
module.exports.sync.byDigest = function getByDigest (cache, digest, opts) {
  return getDataSync(true, cache, digest, opts)
}

function getDataSync (byDigest, cache, key, opts = {}) {
  const { integrity, memoize, size } = opts
  const memoized = byDigest
    ? memo.get.byDigest(cache, key, opts)
    : memo.get(cache, key, opts)
  if (memoized && memoize !== false) {
    return byDigest
      ? memoized
      : {
        metadata: memoized.entry.metadata,
        data: memoized.data,
        integrity: memoized.entry.integrity,
        size: memoized.entry.size
      }
  }
  const entry = !byDigest && index.find.sync(cache, key, opts)
  if (!entry && !byDigest) {
    throw new index.NotFoundError(cache, key)
  }
  const data = read.sync(cache, byDigest ? key : entry.integrity, {
    integrity: integrity,
    size: size
  })
  const res = byDigest
    ? data
    : {
      metadata: entry.metadata,
      data: data,
      size: entry.size,
      integrity: entry.integrity
    }
  if (memoize && byDigest) {
    memo.put.byDigest(cache, key, res, opts)
  } else if (memoize) {
    memo.put(cache, entry, res.data, opts)
  }
  return res
}

module.exports.stream = getStream

const getMemoizedStream = (memoized) => {
  const stream = new Minipass()
  stream.on('newListener', function (ev, cb) {
    ev === 'metadata' && cb(memoized.entry.metadata)
    ev === 'integrity' && cb(memoized.entry.integrity)
    ev === 'size' && cb(memoized.entry.size)
  })
  stream.end(memoized.data)
  return stream
}

function getStream (cache, key, opts = {}) {
  const { memoize, size } = opts
  const memoized = memo.get(cache, key, opts)
  if (memoized && memoize !== false) {
    return getMemoizedStream(memoized)
  }

  const stream = new Pipeline()
  index
    .find(cache, key)
    .then((entry) => {
      if (!entry) {
        throw new index.NotFoundError(cache, key)
      }
      stream.emit('metadata', entry.metadata)
      stream.emit('integrity', entry.integrity)
      stream.emit('size', entry.size)
      stream.on('newListener', function (ev, cb) {
        ev === 'metadata' && cb(entry.metadata)
        ev === 'integrity' && cb(entry.integrity)
        ev === 'size' && cb(entry.size)
      })

      const src = read.readStream(
        cache,
        entry.integrity,
        { ...opts, size: typeof size !== 'number' ? entry.size : size }
      )

      if (memoize) {
        const memoStream = new Collect.PassThrough()
        memoStream.on('collect', data => memo.put(cache, entry, data, opts))
        stream.unshift(memoStream)
      }
      stream.unshift(src)
    })
    .catch((err) => stream.emit('error', err))

  return stream
}

module.exports.stream.byDigest = getStreamDigest

function getStreamDigest (cache, integrity, opts = {}) {
  const { memoize } = opts
  const memoized = memo.get.byDigest(cache, integrity, opts)
  if (memoized && memoize !== false) {
    const stream = new Minipass()
    stream.end(memoized)
    return stream
  } else {
    const stream = read.readStream(cache, integrity, opts)
    if (!memoize) {
      return stream
    }

    const memoStream = new Collect.PassThrough()
    memoStream.on('collect', data => memo.put.byDigest(
      cache,
      integrity,
      data,
      opts
    ))
    return new Pipeline(stream, memoStream)
  }
}

module.exports.info = info

function info (cache, key, opts = {}) {
  const { memoize } = opts
  const memoized = memo.get(cache, key, opts)
  if (memoized && memoize !== false) {
    return Promise.resolve(memoized.entry)
  } else {
    return index.find(cache, key)
  }
}

module.exports.hasContent = read.hasContent

function cp (cache, key, dest, opts) {
  return copy(false, cache, key, dest, opts)
}

module.exports.copy = cp

function cpDigest (cache, digest, dest, opts) {
  return copy(true, cache, digest, dest, opts)
}

module.exports.copy.byDigest = cpDigest

function copy (byDigest, cache, key, dest, opts = {}) {
  if (read.copy) {
    return (byDigest
      ? Promise.resolve(null)
      : index.find(cache, key, opts)
    ).then((entry) => {
      if (!entry && !byDigest) {
        throw new index.NotFoundError(cache, key)
      }
      return read
        .copy(cache, byDigest ? key : entry.integrity, dest, opts)
        .then(() => {
          return byDigest
            ? key
            : {
              metadata: entry.metadata,
              size: entry.size,
              integrity: entry.integrity
            }
        })
    })
  }

  return getData(byDigest, cache, key, opts).then((res) => {
    return writeFile(dest, byDigest ? res : res.data).then(() => {
      return byDigest
        ? key
        : {
          metadata: res.metadata,
          size: res.size,
          integrity: res.integrity
        }
    })
  })
}

41
assets_old/node_modules/cacache/index.js generated vendored Normal file

@@ -0,0 +1,41 @@
'use strict'
const ls = require('./ls.js')
const get = require('./get.js')
const put = require('./put.js')
const rm = require('./rm.js')
const verify = require('./verify.js')
const { clearMemoized } = require('./lib/memoization.js')
const tmp = require('./lib/util/tmp.js')
module.exports.ls = ls
module.exports.ls.stream = ls.stream
module.exports.get = get
module.exports.get.byDigest = get.byDigest
module.exports.get.sync = get.sync
module.exports.get.sync.byDigest = get.sync.byDigest
module.exports.get.stream = get.stream
module.exports.get.stream.byDigest = get.stream.byDigest
module.exports.get.copy = get.copy
module.exports.get.copy.byDigest = get.copy.byDigest
module.exports.get.info = get.info
module.exports.get.hasContent = get.hasContent
module.exports.get.hasContent.sync = get.hasContent.sync
module.exports.put = put
module.exports.put.stream = put.stream
module.exports.rm = rm.entry
module.exports.rm.all = rm.all
module.exports.rm.entry = module.exports.rm
module.exports.rm.content = rm.content
module.exports.clearMemoized = clearMemoized
module.exports.tmp = {}
module.exports.tmp.mkdir = tmp.mkdir
module.exports.tmp.withTmp = tmp.withTmp
module.exports.verify = verify
module.exports.verify.lastRun = verify.lastRun

29
assets_old/node_modules/cacache/lib/content/path.js generated vendored Normal file

@ -0,0 +1,29 @@
'use strict'
const contentVer = require('../../package.json')['cache-version'].content
const hashToSegments = require('../util/hash-to-segments')
const path = require('path')
const ssri = require('ssri')
// Current format of content file path:
//
// sha512-BaSE64Hex= ->
// ~/.my-cache/content-v2/sha512/ba/da/55deadbeefc0ffee
//
module.exports = contentPath
function contentPath (cache, integrity) {
const sri = ssri.parse(integrity, { single: true })
// contentPath is the *strongest* algo given
return path.join(
contentDir(cache),
sri.algorithm,
...hashToSegments(sri.hexDigest())
)
}
module.exports.contentDir = contentDir
function contentDir (cache) {
return path.join(cache, `content-v${contentVer}`)
}
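// Example: for an illustrative sha512 integrity whose hex digest begins
// 'bada55', the mapping works out as:
//
//   contentPath('/my-cache', integrity)
//   // -> '/my-cache/content-v2/sha512/ba/da/55...'
//
// (hashToSegments splits the hex digest into 2 + 2 + remainder.)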

250
assets_old/node_modules/cacache/lib/content/read.js generated vendored Normal file

@ -0,0 +1,250 @@
'use strict'
const util = require('util')
const fs = require('fs')
const fsm = require('fs-minipass')
const ssri = require('ssri')
const contentPath = require('./path')
const Pipeline = require('minipass-pipeline')
const lstat = util.promisify(fs.lstat)
const readFile = util.promisify(fs.readFile)
module.exports = read
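// Content up to this size (64 MiB) is read with a single readFile call;
// anything larger is streamed through an integrity-checking pipeline.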
const MAX_SINGLE_READ_SIZE = 64 * 1024 * 1024
function read (cache, integrity, opts = {}) {
const { size } = opts
return withContentSri(cache, integrity, (cpath, sri) => {
// get size
return lstat(cpath).then(stat => ({ stat, cpath, sri }))
}).then(({ stat, cpath, sri }) => {
if (typeof size === 'number' && stat.size !== size) {
throw sizeError(size, stat.size)
}
if (stat.size > MAX_SINGLE_READ_SIZE) {
return readPipeline(cpath, stat.size, sri, new Pipeline()).concat()
}
return readFile(cpath, null).then((data) => {
if (!ssri.checkData(data, sri)) {
throw integrityError(sri, cpath)
}
return data
})
})
}
const readPipeline = (cpath, size, sri, stream) => {
stream.push(
new fsm.ReadStream(cpath, {
size,
readSize: MAX_SINGLE_READ_SIZE
}),
ssri.integrityStream({
integrity: sri,
size
})
)
return stream
}
module.exports.sync = readSync
function readSync (cache, integrity, opts = {}) {
const { size } = opts
return withContentSriSync(cache, integrity, (cpath, sri) => {
const data = fs.readFileSync(cpath)
if (typeof size === 'number' && size !== data.length) {
throw sizeError(size, data.length)
}
if (ssri.checkData(data, sri)) {
return data
}
throw integrityError(sri, cpath)
})
}
module.exports.stream = readStream
module.exports.readStream = readStream
function readStream (cache, integrity, opts = {}) {
const { size } = opts
const stream = new Pipeline()
withContentSri(cache, integrity, (cpath, sri) => {
// just lstat to ensure it exists
return lstat(cpath).then((stat) => ({ stat, cpath, sri }))
}).then(({ stat, cpath, sri }) => {
if (typeof size === 'number' && size !== stat.size) {
return stream.emit('error', sizeError(size, stat.size))
}
readPipeline(cpath, stat.size, sri, stream)
}, er => stream.emit('error', er))
return stream
}
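// Example: streaming verified content out of the store. A minimal sketch;
// the cache path and integrity are illustrative. A size mismatch or a
// failed checksum surfaces as an 'error' event (EBADSIZE / EINTEGRITY):
//
//   readStream('./my-cache', 'sha512-<base64 digest>==', { size: 1024 })
//     .on('error', console.error)
//     .pipe(process.stdout)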
let copyFile
if (fs.copyFile) {
module.exports.copy = copy
module.exports.copy.sync = copySync
copyFile = util.promisify(fs.copyFile)
}
function copy (cache, integrity, dest) {
return withContentSri(cache, integrity, (cpath, sri) => {
return copyFile(cpath, dest)
})
}
function copySync (cache, integrity, dest) {
return withContentSriSync(cache, integrity, (cpath, sri) => {
return fs.copyFileSync(cpath, dest)
})
}
module.exports.hasContent = hasContent
function hasContent (cache, integrity) {
if (!integrity) {
return Promise.resolve(false)
}
return withContentSri(cache, integrity, (cpath, sri) => {
return lstat(cpath).then((stat) => ({ size: stat.size, sri, stat }))
}).catch((err) => {
if (err.code === 'ENOENT') {
return false
}
if (err.code === 'EPERM') {
/* istanbul ignore else */
if (process.platform !== 'win32') {
throw err
} else {
return false
}
}
})
}
module.exports.hasContent.sync = hasContentSync
function hasContentSync (cache, integrity) {
if (!integrity) {
return false
}
return withContentSriSync(cache, integrity, (cpath, sri) => {
try {
const stat = fs.lstatSync(cpath)
return { size: stat.size, sri, stat }
} catch (err) {
if (err.code === 'ENOENT') {
return false
}
if (err.code === 'EPERM') {
/* istanbul ignore else */
if (process.platform !== 'win32') {
throw err
} else {
return false
}
}
}
})
}
function withContentSri (cache, integrity, fn) {
const tryFn = () => {
const sri = ssri.parse(integrity)
// If `integrity` has multiple entries, pick the first digest
// with available local data.
const algo = sri.pickAlgorithm()
const digests = sri[algo]
if (digests.length <= 1) {
const cpath = contentPath(cache, digests[0])
return fn(cpath, digests[0])
} else {
// Can't use Promise.race() here because a generic error can arrive before
// an ENOENT error, and can arrive before a valid result
return Promise
.all(digests.map((meta) => {
return withContentSri(cache, meta, fn)
.catch((err) => {
if (err.code === 'ENOENT') {
return Object.assign(
new Error('No matching content found for ' + sri.toString()),
{ code: 'ENOENT' }
)
}
return err
})
}))
.then((results) => {
// Return the first non-error result, if one is found
const result = results.find((r) => !(r instanceof Error))
if (result) {
return result
}
// Throw the No matching content found error
const enoentError = results.find((r) => r.code === 'ENOENT')
if (enoentError) {
throw enoentError
}
// Throw generic error
throw results.find((r) => r instanceof Error)
})
}
}
return new Promise((resolve, reject) => {
try {
tryFn()
.then(resolve)
.catch(reject)
} catch (err) {
reject(err)
}
})
}
function withContentSriSync (cache, integrity, fn) {
const sri = ssri.parse(integrity)
// If `integrity` has multiple entries, pick the first digest
// with available local data.
const algo = sri.pickAlgorithm()
const digests = sri[algo]
if (digests.length <= 1) {
const cpath = contentPath(cache, digests[0])
return fn(cpath, digests[0])
} else {
let lastErr = null
for (const meta of digests) {
try {
return withContentSriSync(cache, meta, fn)
} catch (err) {
lastErr = err
}
}
throw lastErr
}
}
function sizeError (expected, found) {
const err = new Error(`Bad data size: expected inserted data to be ${expected} bytes, but got ${found} instead`)
err.expected = expected
err.found = found
err.code = 'EBADSIZE'
return err
}
function integrityError (sri, path) {
const err = new Error(`Integrity verification failed for ${sri} (${path})`)
err.code = 'EINTEGRITY'
err.sri = sri
err.path = path
return err
}

20
assets_old/node_modules/cacache/lib/content/rm.js generated vendored Normal file

@ -0,0 +1,20 @@
'use strict'
const util = require('util')
const contentPath = require('./path')
const { hasContent } = require('./read')
const rimraf = util.promisify(require('rimraf'))
module.exports = rm
function rm (cache, integrity) {
return hasContent(cache, integrity).then((content) => {
// ~pretty~ sure we can't end up with a content lacking sri, but be safe
if (content && content.sri) {
return rimraf(contentPath(cache, content.sri)).then(() => true)
} else {
return false
}
})
}

184
assets_old/node_modules/cacache/lib/content/write.js generated vendored Normal file

@ -0,0 +1,184 @@
'use strict'
const util = require('util')
const contentPath = require('./path')
const fixOwner = require('../util/fix-owner')
const fs = require('fs')
const moveFile = require('../util/move-file')
const Minipass = require('minipass')
const Pipeline = require('minipass-pipeline')
const Flush = require('minipass-flush')
const path = require('path')
const rimraf = util.promisify(require('rimraf'))
const ssri = require('ssri')
const uniqueFilename = require('unique-filename')
const { disposer } = require('./../util/disposer')
const fsm = require('fs-minipass')
const writeFile = util.promisify(fs.writeFile)
module.exports = write
function write (cache, data, opts = {}) {
const { algorithms, size, integrity } = opts
if (algorithms && algorithms.length > 1) {
throw new Error('opts.algorithms only supports a single algorithm for now')
}
if (typeof size === 'number' && data.length !== size) {
return Promise.reject(sizeError(size, data.length))
}
const sri = ssri.fromData(data, algorithms ? { algorithms } : {})
if (integrity && !ssri.checkData(data, integrity, opts)) {
return Promise.reject(checksumError(integrity, sri))
}
return disposer(makeTmp(cache, opts), makeTmpDisposer,
(tmp) => {
return writeFile(tmp.target, data, { flag: 'wx' })
.then(() => moveToDestination(tmp, cache, sri, opts))
})
.then(() => ({ integrity: sri, size: data.length }))
}
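// Example: writing a buffer into the content store. A minimal sketch with
// an illustrative cache path:
//
//   write('./my-cache', Buffer.from('hello'))
//     .then(({ integrity, size }) => {
//       // size === 5; integrity is the sha512 SRI of 'hello'
//     })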
module.exports.stream = writeStream
// writes are proxied to the 'inputStream' that is passed to the Promise;
// 'end' is deferred until the content is handled.
class CacacheWriteStream extends Flush {
constructor (cache, opts) {
super()
this.opts = opts
this.cache = cache
this.inputStream = new Minipass()
this.inputStream.on('error', er => this.emit('error', er))
this.inputStream.on('drain', () => this.emit('drain'))
this.handleContentP = null
}
write (chunk, encoding, cb) {
if (!this.handleContentP) {
this.handleContentP = handleContent(
this.inputStream,
this.cache,
this.opts
)
}
return this.inputStream.write(chunk, encoding, cb)
}
flush (cb) {
this.inputStream.end(() => {
if (!this.handleContentP) {
const e = new Error('Cache input stream was empty')
e.code = 'ENODATA'
// empty streams are probably emitting end right away.
// defer this one tick by rejecting a promise on it.
return Promise.reject(e).catch(cb)
}
this.handleContentP.then(
(res) => {
res.integrity && this.emit('integrity', res.integrity)
res.size !== null && this.emit('size', res.size)
cb()
},
(er) => cb(er)
)
})
}
}
function writeStream (cache, opts = {}) {
return new CacacheWriteStream(cache, opts)
}
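// Example: streaming data into the content store. A minimal sketch; the
// input path is illustrative (fs is already required above):
//
//   fs.createReadStream('./input')
//     .pipe(writeStream('./my-cache'))
//     .on('integrity', (sri) => console.log('stored as', sri.toString()))
//     .on('error', console.error)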
function handleContent (inputStream, cache, opts) {
return disposer(makeTmp(cache, opts), makeTmpDisposer, (tmp) => {
return pipeToTmp(inputStream, cache, tmp.target, opts)
.then((res) => {
return moveToDestination(
tmp,
cache,
res.integrity,
opts
).then(() => res)
})
})
}
function pipeToTmp (inputStream, cache, tmpTarget, opts) {
let integrity
let size
const hashStream = ssri.integrityStream({
integrity: opts.integrity,
algorithms: opts.algorithms,
size: opts.size
})
hashStream.on('integrity', i => { integrity = i })
hashStream.on('size', s => { size = s })
const outStream = new fsm.WriteStream(tmpTarget, {
flags: 'wx'
})
// NB: this pipeline can reject if the hashStream finds a problem, even
// once the data is fully written, but pipeToTmp is only called in
// promise-returning contexts where that rejection is handled.
const pipeline = new Pipeline(
inputStream,
hashStream,
outStream
)
return pipeline.promise()
.then(() => ({ integrity, size }))
.catch(er => rimraf(tmpTarget).then(() => { throw er }))
}
function makeTmp (cache, opts) {
const tmpTarget = uniqueFilename(path.join(cache, 'tmp'), opts.tmpPrefix)
return fixOwner.mkdirfix(cache, path.dirname(tmpTarget)).then(() => ({
target: tmpTarget,
moved: false
}))
}
function makeTmpDisposer (tmp) {
if (tmp.moved) {
return Promise.resolve()
}
return rimraf(tmp.target)
}
function moveToDestination (tmp, cache, sri, opts) {
const destination = contentPath(cache, sri)
const destDir = path.dirname(destination)
return fixOwner
.mkdirfix(cache, destDir)
.then(() => {
return moveFile(tmp.target, destination)
})
.then(() => {
tmp.moved = true
return fixOwner.chownr(cache, destination)
})
}
function sizeError (expected, found) {
const err = new Error(`Bad data size: expected inserted data to be ${expected} bytes, but got ${found} instead`)
err.expected = expected
err.found = found
err.code = 'EBADSIZE'
return err
}
function checksumError (expected, found) {
const err = new Error(`Integrity check failed:
Wanted: ${expected}
Found: ${found}`)
err.code = 'EINTEGRITY'
err.expected = expected
err.found = found
return err
}

305
assets_old/node_modules/cacache/lib/entry-index.js generated vendored Normal file

@ -0,0 +1,305 @@
'use strict'
const util = require('util')
const crypto = require('crypto')
const fs = require('fs')
const Minipass = require('minipass')
const path = require('path')
const ssri = require('ssri')
const contentPath = require('./content/path')
const fixOwner = require('./util/fix-owner')
const hashToSegments = require('./util/hash-to-segments')
const indexV = require('../package.json')['cache-version'].index
const appendFile = util.promisify(fs.appendFile)
const readFile = util.promisify(fs.readFile)
const readdir = util.promisify(fs.readdir)
module.exports.NotFoundError = class NotFoundError extends Error {
constructor (cache, key) {
super(`No cache entry for ${key} found in ${cache}`)
this.code = 'ENOENT'
this.cache = cache
this.key = key
}
}
module.exports.insert = insert
function insert (cache, key, integrity, opts = {}) {
const { metadata, size } = opts
const bucket = bucketPath(cache, key)
const entry = {
key,
integrity: integrity && ssri.stringify(integrity),
time: Date.now(),
size,
metadata
}
return fixOwner
.mkdirfix(cache, path.dirname(bucket))
.then(() => {
const stringified = JSON.stringify(entry)
// NOTE - Cleverness ahoy!
//
// This works because it's tremendously unlikely for an entry to corrupt
// another while still preserving the string length of the JSON in
// question. So, we just slap the length in there and verify it on read.
//
// Thanks to @isaacs for the whiteboarding session that ended up with this.
return appendFile(bucket, `\n${hashEntry(stringified)}\t${stringified}`)
})
.then(() => fixOwner.chownr(cache, bucket))
.catch((err) => {
// There's a class of race conditions that happen when things get deleted
// during fixOwner, or between the two mkdirfix/chownr calls.
//
// It's perfectly fine to just not bother in those cases and lie
// that the index entry was written. Because it's a cache.
if (err.code === 'ENOENT') {
return undefined
}
throw err
})
.then(() => {
return formatEntry(cache, entry)
})
}
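// For illustration, an appended bucket line looks like (values are
// hypothetical):
//
//   <sha1-of-json-hex>\t{"key":"my-key","integrity":"sha512-...","size":5}
//
// bucketEntries() below recomputes the sha1 over the JSON and silently
// drops any line whose prefix doesn't match (e.g. a torn append).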
module.exports.insert.sync = insertSync
function insertSync (cache, key, integrity, opts = {}) {
const { metadata, size } = opts
const bucket = bucketPath(cache, key)
const entry = {
key,
integrity: integrity && ssri.stringify(integrity),
time: Date.now(),
size,
metadata
}
fixOwner.mkdirfix.sync(cache, path.dirname(bucket))
const stringified = JSON.stringify(entry)
fs.appendFileSync(bucket, `\n${hashEntry(stringified)}\t${stringified}`)
try {
fixOwner.chownr.sync(cache, bucket)
} catch (err) {
if (err.code !== 'ENOENT') {
throw err
}
}
return formatEntry(cache, entry)
}
module.exports.find = find
function find (cache, key) {
const bucket = bucketPath(cache, key)
return bucketEntries(bucket)
.then((entries) => {
return entries.reduce((latest, next) => {
if (next && next.key === key) {
return formatEntry(cache, next)
} else {
return latest
}
}, null)
})
.catch((err) => {
if (err.code === 'ENOENT') {
return null
} else {
throw err
}
})
}
module.exports.find.sync = findSync
function findSync (cache, key) {
const bucket = bucketPath(cache, key)
try {
return bucketEntriesSync(bucket).reduce((latest, next) => {
if (next && next.key === key) {
return formatEntry(cache, next)
} else {
return latest
}
}, null)
} catch (err) {
if (err.code === 'ENOENT') {
return null
} else {
throw err
}
}
}
module.exports.delete = del
function del (cache, key, opts) {
return insert(cache, key, null, opts)
}
module.exports.delete.sync = delSync
function delSync (cache, key, opts) {
return insertSync(cache, key, null, opts)
}
module.exports.lsStream = lsStream
function lsStream (cache) {
const indexDir = bucketDir(cache)
const stream = new Minipass({ objectMode: true })
readdirOrEmpty(indexDir).then(buckets => Promise.all(
buckets.map(bucket => {
const bucketPath = path.join(indexDir, bucket)
return readdirOrEmpty(bucketPath).then(subbuckets => Promise.all(
subbuckets.map(subbucket => {
const subbucketPath = path.join(bucketPath, subbucket)
// "/cachename/<bucket 0xFF>/<bucket 0xFF>./*"
return readdirOrEmpty(subbucketPath).then(entries => Promise.all(
entries.map(entry => {
const entryPath = path.join(subbucketPath, entry)
return bucketEntries(entryPath).then(entries =>
// a Map deduplicates keys: later lines for the same key
// overwrite earlier ones, so only the newest entry survives
entries.reduce((acc, entry) => {
acc.set(entry.key, entry)
return acc
}, new Map())
).then(reduced => {
// reduced is a map of key => entry
for (const entry of reduced.values()) {
const formatted = formatEntry(cache, entry)
if (formatted) {
stream.write(formatted)
}
}
}).catch(err => {
if (err.code === 'ENOENT') { return undefined }
throw err
})
})
))
})
))
})
))
.then(
() => stream.end(),
err => stream.emit('error', err)
)
return stream
}
module.exports.ls = ls
function ls (cache) {
return lsStream(cache).collect().then(entries =>
entries.reduce((acc, xs) => {
acc[xs.key] = xs
return acc
}, {})
)
}
function bucketEntries (bucket, filter) {
return readFile(bucket, 'utf8').then((data) => _bucketEntries(data, filter))
}
function bucketEntriesSync (bucket, filter) {
const data = fs.readFileSync(bucket, 'utf8')
return _bucketEntries(data, filter)
}
function _bucketEntries (data, filter) {
const entries = []
data.split('\n').forEach((entry) => {
if (!entry) {
return
}
const pieces = entry.split('\t')
if (!pieces[1] || hashEntry(pieces[1]) !== pieces[0]) {
// Hash is no good! Corruption or malice? Doesn't matter!
// EJECT EJECT
return
}
let obj
try {
obj = JSON.parse(pieces[1])
} catch (e) {
// Entry is corrupted!
return
}
if (obj) {
entries.push(obj)
}
})
return entries
}
module.exports.bucketDir = bucketDir
function bucketDir (cache) {
return path.join(cache, `index-v${indexV}`)
}
module.exports.bucketPath = bucketPath
function bucketPath (cache, key) {
const hashed = hashKey(key)
return path.join.apply(
path,
[bucketDir(cache)].concat(hashToSegments(hashed))
)
}
module.exports.hashKey = hashKey
function hashKey (key) {
return hash(key, 'sha256')
}
module.exports.hashEntry = hashEntry
function hashEntry (str) {
return hash(str, 'sha1')
}
function hash (str, digest) {
return crypto
.createHash(digest)
.update(str)
.digest('hex')
}
function formatEntry (cache, entry) {
// Treat null digests as deletions. They'll shadow any previous entries.
if (!entry.integrity) {
return null
}
return {
key: entry.key,
integrity: entry.integrity,
path: contentPath(cache, entry.integrity),
size: entry.size,
time: entry.time,
metadata: entry.metadata
}
}
function readdirOrEmpty (dir) {
return readdir(dir).catch((err) => {
if (err.code === 'ENOENT' || err.code === 'ENOTDIR') {
return []
}
throw err
})
}

74
assets_old/node_modules/cacache/lib/memoization.js generated vendored Normal file

@ -0,0 +1,74 @@
'use strict'
const LRU = require('lru-cache')
const MAX_SIZE = 50 * 1024 * 1024 // 50MB
const MAX_AGE = 3 * 60 * 1000 // 3 minutes
const MEMOIZED = new LRU({
max: MAX_SIZE,
maxAge: MAX_AGE,
length: (entry, key) => key.startsWith('key:') ? entry.data.length : entry.length
})
module.exports.clearMemoized = clearMemoized
function clearMemoized () {
const old = {}
MEMOIZED.forEach((v, k) => {
old[k] = v
})
MEMOIZED.reset()
return old
}
module.exports.put = put
function put (cache, entry, data, opts) {
pickMem(opts).set(`key:${cache}:${entry.key}`, { entry, data })
putDigest(cache, entry.integrity, data, opts)
}
module.exports.put.byDigest = putDigest
function putDigest (cache, integrity, data, opts) {
pickMem(opts).set(`digest:${cache}:${integrity}`, data)
}
module.exports.get = get
function get (cache, key, opts) {
return pickMem(opts).get(`key:${cache}:${key}`)
}
module.exports.get.byDigest = getDigest
function getDigest (cache, integrity, opts) {
return pickMem(opts).get(`digest:${cache}:${integrity}`)
}
class ObjProxy {
constructor (obj) {
this.obj = obj
}
get (key) {
return this.obj[key]
}
set (key, val) {
this.obj[key] = val
}
}
function pickMem (opts) {
if (!opts || !opts.memoize) {
return MEMOIZED
} else if (opts.memoize.get && opts.memoize.set) {
return opts.memoize
} else if (typeof opts.memoize === 'object') {
return new ObjProxy(opts.memoize)
} else {
return MEMOIZED
}
}
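// Example: supplying a custom memoization store via opts.memoize. A Map
// has get/set methods, so pickMem() uses it directly; a plain object is
// wrapped in ObjProxy. Sketch with illustrative values:
//
//   const store = new Map()
//   putDigest('/my-cache', 'sha512-<digest>', Buffer.from('data'), { memoize: store })
//   getDigest('/my-cache', 'sha512-<digest>', { memoize: store }) // -> the buffer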

30
assets_old/node_modules/cacache/lib/util/disposer.js generated vendored Normal file

@ -0,0 +1,30 @@
'use strict'
module.exports.disposer = disposer
function disposer (creatorFn, disposerFn, fn) {
const runDisposer = (resource, result, shouldThrow = false) => {
return disposerFn(resource)
.then(
// disposer resolved, do something with original fn's promise
() => {
if (shouldThrow) {
throw result
}
return result
},
// Disposer fn failed, crash process
(err) => {
throw err
// Or process.exit?
})
}
return creatorFn
.then((resource) => {
// fn(resource) can throw, so wrap in a promise here
return Promise.resolve().then(() => fn(resource))
.then((result) => runDisposer(resource, result))
.catch((err) => runDisposer(resource, err, true))
})
}
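// Example: disposerFn runs whether fn resolves or rejects, and the original
// result (or error) is passed through afterwards. A minimal sketch with
// illustrative stand-ins:
//
//   disposer(
//     Promise.resolve('resource'),          // creatorFn: acquire
//     (res) => Promise.resolve(),           // disposerFn: release
//     (res) => Promise.resolve(res.length)  // fn: use the resource
//   ).then((len) => { /* len === 8; cleanup has already run */ })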

145
assets_old/node_modules/cacache/lib/util/fix-owner.js generated vendored Normal file

@ -0,0 +1,145 @@
'use strict'
const util = require('util')
const chownr = util.promisify(require('chownr'))
const mkdirp = require('mkdirp')
const inflight = require('promise-inflight')
const inferOwner = require('infer-owner')
// Memoize getuid()/getgid() calls.
// patch process.setuid/setgid to invalidate cached value on change
const self = { uid: null, gid: null }
const getSelf = () => {
if (typeof self.uid !== 'number') {
self.uid = process.getuid()
const setuid = process.setuid
process.setuid = (uid) => {
self.uid = null
process.setuid = setuid
return process.setuid(uid)
}
}
if (typeof self.gid !== 'number') {
self.gid = process.getgid()
const setgid = process.setgid
process.setgid = (gid) => {
self.gid = null
process.setgid = setgid
return process.setgid(gid)
}
}
}
module.exports.chownr = fixOwner
function fixOwner (cache, filepath) {
if (!process.getuid) {
// This platform doesn't need ownership fixing
return Promise.resolve()
}
getSelf()
if (self.uid !== 0) {
// almost certainly can't chown anyway
return Promise.resolve()
}
return Promise.resolve(inferOwner(cache)).then((owner) => {
const { uid, gid } = owner
// No need to override if it's already what we used.
if (self.uid === uid && self.gid === gid) {
return
}
return inflight('fixOwner: fixing ownership on ' + filepath, () =>
chownr(
filepath,
typeof uid === 'number' ? uid : self.uid,
typeof gid === 'number' ? gid : self.gid
).catch((err) => {
if (err.code === 'ENOENT') {
return null
}
throw err
})
)
})
}
module.exports.chownr.sync = fixOwnerSync
function fixOwnerSync (cache, filepath) {
if (!process.getuid) {
// This platform doesn't need ownership fixing
return
}
const { uid, gid } = inferOwner.sync(cache)
getSelf()
if (self.uid !== 0) {
// almost certainly can't chown anyway
return
}
if (self.uid === uid && self.gid === gid) {
// No need to override if it's already what we used.
return
}
try {
chownr.sync(
filepath,
typeof uid === 'number' ? uid : self.uid,
typeof gid === 'number' ? gid : self.gid
)
} catch (err) {
// only catch ENOENT, any other error is a problem.
if (err.code === 'ENOENT') {
return null
}
throw err
}
}
module.exports.mkdirfix = mkdirfix
function mkdirfix (cache, p) {
// we have to infer the owner _before_ making the directory, even though
// we aren't going to use the results, since the cache itself might not
// exist yet. If we mkdirp it, then our current uid/gid will be assumed
// to be correct if it creates the cache folder in the process.
return Promise.resolve(inferOwner(cache)).then(() => {
return mkdirp(p)
.then((made) => {
if (made) {
return fixOwner(cache, made).then(() => made)
}
})
.catch((err) => {
if (err.code === 'EEXIST') {
return fixOwner(cache, p).then(() => null)
}
throw err
})
})
}
module.exports.mkdirfix.sync = mkdirfixSync
function mkdirfixSync (cache, p) {
try {
inferOwner.sync(cache)
const made = mkdirp.sync(p)
if (made) {
fixOwnerSync(cache, made)
return made
}
} catch (err) {
if (err.code === 'EEXIST') {
fixOwnerSync(cache, p)
return null
} else {
throw err
}
}
}

7
assets_old/node_modules/cacache/lib/util/hash-to-segments.js generated vendored Normal file

@ -0,0 +1,7 @@
'use strict'
module.exports = hashToSegments
function hashToSegments (hash) {
return [hash.slice(0, 2), hash.slice(2, 4), hash.slice(4)]
}
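// Example:
//
//   hashToSegments('bada55c0ffee')
//   // -> ['ba', 'da', '55c0ffee']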

69
assets_old/node_modules/cacache/lib/util/move-file.js generated vendored Normal file

@ -0,0 +1,69 @@
'use strict'
const fs = require('fs')
const util = require('util')
const chmod = util.promisify(fs.chmod)
const unlink = util.promisify(fs.unlink)
const stat = util.promisify(fs.stat)
const move = require('@npmcli/move-file')
const pinflight = require('promise-inflight')
module.exports = moveFile
function moveFile (src, dest) {
const isWindows = global.__CACACHE_TEST_FAKE_WINDOWS__ ||
process.platform === 'win32'
// This isn't quite an fs.rename -- the assumption is that
// if `dest` already exists, and we get certain errors while
// trying to move it, we should just not bother.
//
// In the case of cache corruption, users will receive an
// EINTEGRITY error elsewhere, and can remove the offending
// content their own way.
//
// Note that, as the name suggests, this strictly only supports file moves.
return new Promise((resolve, reject) => {
fs.link(src, dest, (err) => {
if (err) {
if (isWindows && err.code === 'EPERM') {
// XXX This is a really weird way to handle this situation, as it
// results in the src file being deleted even though the dest
// might not exist. Since we pretty much always write files to
// deterministic locations based on content hash, this is likely
// ok (or at worst, just ends in a future cache miss). But it would
// be worth investigating at some time in the future if this is
// really what we want to do here.
return resolve()
} else if (err.code === 'EEXIST' || err.code === 'EBUSY') {
// file already exists, so whatever
return resolve()
} else {
return reject(err)
}
} else {
return resolve()
}
})
})
.then(() => {
// content should never change for any reason, so make it read-only
return Promise.all([
unlink(src),
!isWindows && chmod(dest, '0444')
])
})
.catch(() => {
return pinflight('cacache-move-file:' + dest, () => {
return stat(dest).catch((err) => {
if (err.code !== 'ENOENT') {
// Something else is wrong here. Bail bail bail
throw err
}
// file doesn't already exist! let's try a rename -> copy fallback
// only delete if it successfully copies
return move(src, dest)
})
})
})
}

35
assets_old/node_modules/cacache/lib/util/tmp.js generated vendored Normal file

@ -0,0 +1,35 @@
'use strict'
const util = require('util')
const fixOwner = require('./fix-owner')
const path = require('path')
const rimraf = util.promisify(require('rimraf'))
const uniqueFilename = require('unique-filename')
const { disposer } = require('./disposer')
module.exports.mkdir = mktmpdir
function mktmpdir (cache, opts = {}) {
const { tmpPrefix } = opts
const tmpTarget = uniqueFilename(path.join(cache, 'tmp'), tmpPrefix)
return fixOwner.mkdirfix(cache, tmpTarget).then(() => {
return tmpTarget
})
}
module.exports.withTmp = withTmp
function withTmp (cache, opts, cb) {
if (!cb) {
cb = opts
opts = {}
}
return disposer(mktmpdir(cache, opts), rimraf, cb)
}
module.exports.fix = fixtmpdir
function fixtmpdir (cache) {
return fixOwner(cache, path.join(cache, 'tmp'))
}
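// Example: scratch work in a tmp dir that is always cleaned up, even if
// the callback rejects. A minimal sketch; the callback body is illustrative:
//
//   withTmp('./my-cache', (dir) => {
//     // write temp files under `dir`; it is rimraf'd once the promise settles
//     return Promise.resolve()
//   })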

287
assets_old/node_modules/cacache/lib/verify.js generated vendored Normal file

@ -0,0 +1,287 @@
'use strict'
const util = require('util')
const pMap = require('p-map')
const contentPath = require('./content/path')
const fixOwner = require('./util/fix-owner')
const fs = require('fs')
const fsm = require('fs-minipass')
const glob = util.promisify(require('glob'))
const index = require('./entry-index')
const path = require('path')
const rimraf = util.promisify(require('rimraf'))
const ssri = require('ssri')
const hasOwnProperty = (obj, key) =>
Object.prototype.hasOwnProperty.call(obj, key)
const stat = util.promisify(fs.stat)
const truncate = util.promisify(fs.truncate)
const writeFile = util.promisify(fs.writeFile)
const readFile = util.promisify(fs.readFile)
const verifyOpts = (opts) => ({
concurrency: 20,
log: { silly () {} },
...opts
})
module.exports = verify
function verify (cache, opts) {
opts = verifyOpts(opts)
opts.log.silly('verify', 'verifying cache at', cache)
const steps = [
markStartTime,
fixPerms,
garbageCollect,
rebuildIndex,
cleanTmp,
writeVerifile,
markEndTime
]
return steps
.reduce((promise, step, i) => {
const label = step.name
const start = new Date()
return promise.then((stats) => {
return step(cache, opts).then((s) => {
s &&
Object.keys(s).forEach((k) => {
stats[k] = s[k]
})
const end = new Date()
if (!stats.runTime) {
stats.runTime = {}
}
stats.runTime[label] = end - start
return Promise.resolve(stats)
})
})
}, Promise.resolve({}))
.then((stats) => {
stats.runTime.total = stats.endTime - stats.startTime
opts.log.silly(
'verify',
'verification finished for',
cache,
'in',
`${stats.runTime.total}ms`
)
return stats
})
}
function markStartTime (cache, opts) {
return Promise.resolve({ startTime: new Date() })
}
function markEndTime (cache, opts) {
return Promise.resolve({ endTime: new Date() })
}
function fixPerms (cache, opts) {
opts.log.silly('verify', 'fixing cache permissions')
return fixOwner
.mkdirfix(cache, cache)
.then(() => {
// TODO - fix file permissions too
return fixOwner.chownr(cache, cache)
})
.then(() => null)
}
// Implements a naive mark-and-sweep tracing garbage collector.
//
// The algorithm is basically as follows:
// 1. Read (and filter) all index entries ("pointers")
// 2. Mark each integrity value as "live"
// 3. Read entire filesystem tree in `content-vX/` dir
// 4. If content is live, verify its checksum and delete it if it fails
// 5. If content is not marked as live, rimraf it.
//
function garbageCollect (cache, opts) {
opts.log.silly('verify', 'garbage collecting content')
const indexStream = index.lsStream(cache)
const liveContent = new Set()
indexStream.on('data', (entry) => {
if (opts.filter && !opts.filter(entry)) {
return
}
liveContent.add(entry.integrity.toString())
})
return new Promise((resolve, reject) => {
indexStream.on('end', resolve).on('error', reject)
}).then(() => {
const contentDir = contentPath.contentDir(cache)
return glob(path.join(contentDir, '**'), {
follow: false,
nodir: true,
nosort: true
}).then((files) => {
return Promise.resolve({
verifiedContent: 0,
reclaimedCount: 0,
reclaimedSize: 0,
badContentCount: 0,
keptSize: 0
}).then((stats) =>
pMap(
files,
(f) => {
const split = f.split(/[/\\]/)
const digest = split.slice(split.length - 3).join('')
const algo = split[split.length - 4]
const integrity = ssri.fromHex(digest, algo)
if (liveContent.has(integrity.toString())) {
return verifyContent(f, integrity).then((info) => {
if (!info.valid) {
stats.reclaimedCount++
stats.badContentCount++
stats.reclaimedSize += info.size
} else {
stats.verifiedContent++
stats.keptSize += info.size
}
return stats
})
} else {
// No entries refer to this content. We can delete.
stats.reclaimedCount++
return stat(f).then((s) => {
return rimraf(f).then(() => {
stats.reclaimedSize += s.size
return stats
})
})
}
},
{ concurrency: opts.concurrency }
).then(() => stats)
)
})
})
}
function verifyContent (filepath, sri) {
return stat(filepath)
.then((s) => {
const contentInfo = {
size: s.size,
valid: true
}
return ssri
.checkStream(new fsm.ReadStream(filepath), sri)
.catch((err) => {
if (err.code !== 'EINTEGRITY') {
throw err
}
return rimraf(filepath).then(() => {
contentInfo.valid = false
})
})
.then(() => contentInfo)
})
.catch((err) => {
if (err.code === 'ENOENT') {
return { size: 0, valid: false }
}
throw err
})
}
function rebuildIndex (cache, opts) {
opts.log.silly('verify', 'rebuilding index')
return index.ls(cache).then((entries) => {
const stats = {
missingContent: 0,
rejectedEntries: 0,
totalEntries: 0
}
const buckets = {}
for (const k in entries) {
/* istanbul ignore else */
if (hasOwnProperty(entries, k)) {
const hashed = index.hashKey(k)
const entry = entries[k]
const excluded = opts.filter && !opts.filter(entry)
excluded && stats.rejectedEntries++
if (buckets[hashed] && !excluded) {
buckets[hashed].push(entry)
} else if (buckets[hashed] && excluded) {
// skip
} else if (excluded) {
buckets[hashed] = []
buckets[hashed]._path = index.bucketPath(cache, k)
} else {
buckets[hashed] = [entry]
buckets[hashed]._path = index.bucketPath(cache, k)
}
}
}
return pMap(
Object.keys(buckets),
(key) => {
return rebuildBucket(cache, buckets[key], stats, opts)
},
{ concurrency: opts.concurrency }
).then(() => stats)
})
}
function rebuildBucket (cache, bucket, stats, opts) {
return truncate(bucket._path).then(() => {
// This needs to be serialized because cacache explicitly
// lets very racy bucket conflicts clobber each other.
return bucket.reduce((promise, entry) => {
return promise.then(() => {
const content = contentPath(cache, entry.integrity)
return stat(content)
.then(() => {
return index
.insert(cache, entry.key, entry.integrity, {
metadata: entry.metadata,
size: entry.size
})
.then(() => {
stats.totalEntries++
})
})
.catch((err) => {
if (err.code === 'ENOENT') {
stats.rejectedEntries++
stats.missingContent++
return
}
throw err
})
})
}, Promise.resolve())
})
}
function cleanTmp (cache, opts) {
opts.log.silly('verify', 'cleaning tmp directory')
return rimraf(path.join(cache, 'tmp'))
}
function writeVerifile (cache, opts) {
const verifile = path.join(cache, '_lastverified')
opts.log.silly('verify', 'writing verifile to ' + verifile)
try {
return writeFile(verifile, '' + +new Date())
} finally {
fixOwner.chownr.sync(cache, verifile)
}
}
module.exports.lastRun = lastRun
function lastRun (cache) {
return readFile(path.join(cache, '_lastverified'), 'utf8').then(
(data) => new Date(+data)
)
}
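// Example: running a verification pass and reading back its timestamp.
// A minimal sketch with an illustrative cache path:
//
//   verify('./my-cache')
//     .then((stats) => console.log('reclaimed', stats.reclaimedCount, 'items'))
//     .then(() => lastRun('./my-cache'))
//     .then((when) => console.log('last verified:', when.toISOString()))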

6
assets_old/node_modules/cacache/ls.js generated vendored Normal file

@ -0,0 +1,6 @@
'use strict'
const index = require('./lib/entry-index')
module.exports = index.ls
module.exports.stream = index.lsStream

95
assets_old/node_modules/cacache/package.json generated vendored Normal file

@ -0,0 +1,95 @@
{
"name": "cacache",
"version": "15.0.6",
"cache-version": {
"content": "2",
"index": "5"
},
"description": "Fast, fault-tolerant, cross-platform, disk-based, data-agnostic, content-addressable cache.",
"main": "index.js",
"files": [
"*.js",
"lib"
],
"scripts": {
"benchmarks": "node test/benchmarks",
"lint": "standard",
"postrelease": "npm publish",
"posttest": "npm run lint",
"prepublishOnly": "git push --follow-tags",
"prerelease": "npm t",
"release": "standard-version -s",
"test": "tap",
"coverage": "tap",
"test-docker": "docker run -it --rm --name pacotest -v \"$PWD\":/tmp -w /tmp node:latest npm test"
},
"repository": "https://github.com/npm/cacache",
"keywords": [
"cache",
"caching",
"content-addressable",
"sri",
"sri hash",
"subresource integrity",
"cache",
"storage",
"store",
"file store",
"filesystem",
"disk cache",
"disk storage"
],
"author": {
"name": "Kat Marchán",
"email": "kzm@sykosomatic.org",
"twitter": "maybekatz"
},
"contributors": [
{
"name": "Charlotte Spencer",
"email": "charlottelaspencer@gmail.com",
"twitter": "charlotteis"
},
{
"name": "Rebecca Turner",
"email": "me@re-becca.org",
"twitter": "ReBeccaOrg"
}
],
"license": "ISC",
"dependencies": {
"@npmcli/move-file": "^1.0.1",
"chownr": "^2.0.0",
"fs-minipass": "^2.0.0",
"glob": "^7.1.4",
"infer-owner": "^1.0.4",
"lru-cache": "^6.0.0",
"minipass": "^3.1.1",
"minipass-collect": "^1.0.2",
"minipass-flush": "^1.0.5",
"minipass-pipeline": "^1.2.2",
"mkdirp": "^1.0.3",
"p-map": "^4.0.0",
"promise-inflight": "^1.0.1",
"rimraf": "^3.0.2",
"ssri": "^8.0.1",
"tar": "^6.0.2",
"unique-filename": "^1.1.1"
},
"devDependencies": {
"benchmark": "^2.1.4",
"chalk": "^4.0.0",
"require-inject": "^1.4.4",
"standard": "^14.3.1",
"standard-version": "^7.1.0",
"tacks": "^1.3.0",
"tap": "^14.10.6"
},
"tap": {
"100": true,
"test-regex": "test/[^/]*.js"
},
"engines": {
"node": ">= 10"
}
}

84
assets_old/node_modules/cacache/put.js generated vendored Normal file

@ -0,0 +1,84 @@
'use strict'
const index = require('./lib/entry-index')
const memo = require('./lib/memoization')
const write = require('./lib/content/write')
const Flush = require('minipass-flush')
const { PassThrough } = require('minipass-collect')
const Pipeline = require('minipass-pipeline')
const putOpts = (opts) => ({
algorithms: ['sha512'],
...opts
})
module.exports = putData
function putData (cache, key, data, opts = {}) {
const { memoize } = opts
opts = putOpts(opts)
return write(cache, data, opts).then((res) => {
return index
.insert(cache, key, res.integrity, { ...opts, size: res.size })
.then((entry) => {
if (memoize) {
memo.put(cache, entry, data, opts)
}
return res.integrity
})
})
}
module.exports.stream = putStream
function putStream (cache, key, opts = {}) {
const { memoize } = opts
opts = putOpts(opts)
let integrity
let size
let memoData
const pipeline = new Pipeline()
// first item in the pipeline is the memoizer, because we need
// that to end first and get the collected data.
if (memoize) {
const memoizer = new PassThrough().on('collect', data => {
memoData = data
})
pipeline.push(memoizer)
}
// contentStream is a write-only sink, not a passthrough;
// no data comes out of it.
const contentStream = write.stream(cache, opts)
.on('integrity', (int) => {
integrity = int
})
.on('size', (s) => {
size = s
})
pipeline.push(contentStream)
// last but not least, we write the index and emit hash and size,
// and memoize if we're doing that
pipeline.push(new Flush({
flush () {
return index
.insert(cache, key, integrity, { ...opts, size })
.then((entry) => {
if (memoize && memoData) {
memo.put(cache, entry, memoData, opts)
}
if (integrity) {
pipeline.emit('integrity', integrity)
}
if (size) {
pipeline.emit('size', size)
}
})
}
}))
return pipeline
}
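// Example: streaming data in under a key. A minimal sketch; the input file,
// cache path, and key are illustrative:
//
//   const fs = require('fs')
//   fs.createReadStream('./input')
//     .pipe(putStream('./my-cache', 'my-key'))
//     .on('integrity', (sri) => console.log('indexed as', sri.toString()))
//     .on('error', console.error)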

31
assets_old/node_modules/cacache/rm.js generated vendored Normal file

@ -0,0 +1,31 @@
'use strict'
const util = require('util')
const index = require('./lib/entry-index')
const memo = require('./lib/memoization')
const path = require('path')
const rimraf = util.promisify(require('rimraf'))
const rmContent = require('./lib/content/rm')
module.exports = entry
module.exports.entry = entry
function entry (cache, key) {
memo.clearMemoized()
return index.delete(cache, key)
}
module.exports.content = content
function content (cache, integrity) {
memo.clearMemoized()
return rmContent(cache, integrity)
}
module.exports.all = all
function all (cache) {
memo.clearMemoized()
return rimraf(path.join(cache, '*(content-*|index-*)'))
}

3
assets_old/node_modules/cacache/verify.js generated vendored Normal file

@ -0,0 +1,3 @@
'use strict'
module.exports = require('./lib/verify')