progress on migrating to heex templates and font-icons

Adam Piontek 2022-08-13 07:32:36 -04:00
parent d43daafdb7
commit 3eff955672
21793 changed files with 2161968 additions and 16895 deletions


@@ -0,0 +1,42 @@
## 0.5.2 (2020-12-26)
- Fixed `RangeError: Maximum call stack size exceeded` in `parseChunked()` on very long arrays (corner case)
## 0.5.1 (2020-12-18)
- Fixed `parseChunked()` crash when input has trailing whitespaces
## 0.5.0 (2020-12-05)
- Added support for Node.js 10
## 0.4.0 (2020-12-04)
- Added `parseChunked()` method
- Fixed `stringifyInfo()` to not throw when meeting an unknown value type
## 0.3.2 (2020-10-26)
- Added a missing file for build purposes
## 0.3.1 (2020-10-26)
- Changed build setup to allow building by any bundler that supports `browser` property in `package.json`
- Exposed version
## 0.3.0 (2020-09-28)
- Renamed `info()` method into `stringifyInfo()`
- Fixed lib's distribution setup
## 0.2.0 (2020-09-28)
- Added `dist` version to package (`dist/json-ext.js` and `dist/json-ext.min.js`)
## 0.1.1 (2020-09-08)
- Fixed main entry point
## 0.1.0 (2020-09-08)
- Initial release

assets_old/node_modules/@discoveryjs/json-ext/LICENSE (generated, vendored, +21 lines)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2020 Roman Dvornov <rdvornov@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

assets_old/node_modules/@discoveryjs/json-ext/README.md (generated, vendored, +255 lines)

@@ -0,0 +1,255 @@
# json-ext
[![NPM version](https://img.shields.io/npm/v/@discoveryjs/json-ext.svg)](https://www.npmjs.com/package/@discoveryjs/json-ext)
[![Build Status](https://travis-ci.org/discoveryjs/json-ext.svg?branch=master)](https://travis-ci.org/discoveryjs/json-ext)
[![Coverage Status](https://coveralls.io/repos/github/discoveryjs/json-ext/badge.svg?branch=master)](https://coveralls.io/github/discoveryjs/json-ext?)
A set of utilities that extend the use of JSON. Designed to be fast and memory efficient.
Features:
- [x] `parseChunked()` Parse JSON that comes in chunks (e.g. a FS readable stream or a fetch response stream)
- [x] `stringifyStream()` Stringify a value into a stream (Node.js)
- [x] `stringifyInfo()` Get the estimated size and other facts about a `JSON.stringify()` result without converting the value to a string
- [ ] **TBD** Support for circular references
- [ ] **TBD** Binary representation [branch](https://github.com/discoveryjs/json-ext/tree/binary)
- [ ] **TBD** WHATWG [Streams](https://streams.spec.whatwg.org/) support
## Install
```bash
npm install @discoveryjs/json-ext
```
## API
- [parseChunked(chunkEmitter)](#parsechunkedchunkemitter)
- [stringifyStream(value[, replacer[, space]])](#stringifystreamvalue-replacer-space)
- [stringifyInfo(value[, replacer[, space[, options]]])](#stringifyinfovalue-replacer-space-options)
- [Options](#options)
- [async](#async)
- [continueOnCircular](#continueoncircular)
- [version](#version)
### parseChunked(chunkEmitter)
Works the same as [`JSON.parse()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse), but takes `chunkEmitter` instead of a string and returns a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise).
> NOTE: The `reviver` parameter is not supported yet, but will be added in a future release.
> NOTE: WHATWG streams aren't supported yet
When to use:
- The main thread must not freeze while parsing big JSON; chunked parsing spreads the work out over time
- Huge JSON needs to be parsed (e.g. >500MB on Node.js)
- Memory pressure needs to be reduced. `JSON.parse()` requires the entire JSON string before it can start parsing, while `parseChunked()` parses JSON as its first bytes arrive. This avoids holding a huge string in memory at a single point in time and the GC pressure that follows
[Benchmark](https://github.com/discoveryjs/json-ext/tree/master/benchmarks#parse-chunked)
Usage:
```js
const { parseChunked } = require('@discoveryjs/json-ext');

// as a regular Promise
parseChunked(chunkEmitter)
    .then(data => {
        /* data is parsed JSON */
    });

// using await (keep in mind that not every runtime has support for top-level await)
const data = await parseChunked(chunkEmitter);
```
Parameter `chunkEmitter` can be:
- [`ReadableStream`](https://nodejs.org/dist/latest-v14.x/docs/api/stream.html#stream_readable_streams) (Node.js only)
```js
const fs = require('fs');
const { parseChunked } = require('@discoveryjs/json-ext');
parseChunked(fs.createReadStream('path/to/file.json'))
```
- A generator, an async generator, or a function that returns an iterable (of chunks). A chunk might be a `string`, `Uint8Array` or `Buffer` (Node.js only):
```js
const { parseChunked } = require('@discoveryjs/json-ext');
const encoder = new TextEncoder();

// generator
parseChunked(function*() {
    yield '{ "hello":';
    yield Buffer.from(' "wor');    // Node.js only
    yield encoder.encode('ld" }'); // returns Uint8Array(5) [ 108, 100, 34, 32, 125 ]
});

// async generator
parseChunked(async function*() {
    for await (const chunk of someAsyncSource) {
        yield chunk;
    }
});

// function that returns iterable
parseChunked(() => ['{ "hello":', ' "world"}'])
```
Using with [fetch()](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API):
```js
async function loadData(url) {
    const response = await fetch(url);
    const reader = response.body.getReader();

    return parseChunked(async function*() {
        while (true) {
            const { done, value } = await reader.read();

            if (done) {
                break;
            }

            yield value;
        }
    });
}

loadData('https://example.com/data.json')
    .then(data => {
        /* data is parsed JSON */
    })
```
### stringifyStream(value[, replacer[, space]])
Works the same as [`JSON.stringify()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify), but returns an instance of [`ReadableStream`](https://nodejs.org/dist/latest-v14.x/docs/api/stream.html#stream_readable_streams) instead of a string.
> NOTE: WHATWG Streams aren't supported yet, so the function is available for Node.js only for now
Departs from `JSON.stringify()`:
- Outputs `null` when `JSON.stringify()` returns `undefined`, since streams may not emit `undefined` (see the sketch below)
- A promise is resolved and the resulting value is stringified as a regular value
- A stream in non-object mode is piped to the output as is
- A stream in object mode is piped to the output as an array of objects
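A minimal sketch of the first two departures (a hypothetical snippet; the expected output is shown in comments, with `String()` converting the emitted Buffers):

```js
const { stringifyStream } = require('@discoveryjs/json-ext');

// JSON.stringify(undefined) returns undefined, but a stream may not emit
// undefined, so "null" is output instead
stringifyStream(undefined)
    .on('data', chunk => console.log(String(chunk))); // null

// a promise is resolved and its value is stringified as a regular one
stringifyStream(Promise.resolve({ ok: true }))
    .on('data', chunk => console.log(String(chunk))); // {"ok":true}
```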
When to use:
- Huge JSON needs to be generated (e.g. >500MB on Node.js)
- Memory pressure needs to be reduced. `JSON.stringify()` must generate the entire JSON string before it can be sent or written anywhere, while `stringifyStream()` lets you send the result as soon as its first bytes appear. This avoids holding a huge string in memory at a single point in time
- The object being serialized contains Promises or Streams (see Usage for examples)
[Benchmark](https://github.com/discoveryjs/json-ext/tree/master/benchmarks#stream-stringifying)
Usage:
```js
const { stringifyStream } = require('@discoveryjs/json-ext');

// handle events
stringifyStream(data)
    .on('data', chunk => console.log(chunk))
    .on('error', error => console.error(error))
    .on('finish', () => console.log('DONE!'));

// pipe into a stream
stringifyStream(data)
    .pipe(writableStream);
```
Using Promise or ReadableStream in serializing object:
```js
const fs = require('fs');
const { stringifyStream } = require('@discoveryjs/json-ext');

// output will be
// {"name":"example","willSerializeResolvedValue":42,"fromFile":[1, 2, 3],"at":{"any":{"level":"promise!"}}}
stringifyStream({
    name: 'example',
    willSerializeResolvedValue: Promise.resolve(42),
    fromFile: fs.createReadStream('path/to/file.json'), // suppose the file content is "[1, 2, 3]"; it'll be inserted as is
    at: {
        any: {
            level: new Promise(resolve => setTimeout(() => resolve('promise!'), 100))
        }
    }
})

// in case several async requests are used in an object, it's preferred
// to put the fastest requests first, because values are stringified in order
// and a slow value blocks everything after it
stringifyStream({
    foo: fetch('http://example.com/request_takes_2s').then(req => req.json()),
    bar: fetch('http://example.com/request_takes_5s').then(req => req.json())
});
```
Using with [`WritableStream`](https://nodejs.org/dist/latest-v14.x/docs/api/stream.html#stream_writable_streams) (Node.js only):
```js
const fs = require('fs');
const { stringifyStream } = require('@discoveryjs/json-ext');

// pipe into a console
stringifyStream(data)
    .pipe(process.stdout);

// pipe into a file
stringifyStream(data)
    .pipe(fs.createWriteStream('path/to/file.json'));

// wrapping into a Promise
new Promise((resolve, reject) => {
    stringifyStream(data)
        .on('error', reject)
        .pipe(stream)
        .on('error', reject)
        .on('finish', resolve);
});
```
### stringifyInfo(value[, replacer[, space[, options]]])
`value`, `replacer` and `space` arguments are the same as for `JSON.stringify()`.
Result is an object:
```js
{
    minLength: Number, // minimal number of bytes when the value is stringified
    circular: [...],   // list of circular references
    duplicate: [...],  // list of objects that occur more than once
    async: [...]       // list of async values, i.e. promises and streams
}
```
Example:
```js
const { stringifyInfo } = require('@discoveryjs/json-ext');

console.log(
    stringifyInfo({ test: true }).minLength
);
// > 13
// that equals '{"test":true}'.length
```
#### Options
##### async
Type: `Boolean`
Default: `false`
Whether to collect async values (promises and streams).
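A minimal sketch of the effect (a hypothetical value; expected counts are shown in comments):

```js
const { stringifyInfo } = require('@discoveryjs/json-ext');

const value = { answer: Promise.resolve(42) };

// by default promises are treated as plain objects and not collected
console.log(stringifyInfo(value).async.length); // 0

// with async: true the promise is reported in the async list
console.log(stringifyInfo(value, null, null, { async: true }).async.length); // 1
```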
##### continueOnCircular
Type: `Boolean`
Default: `false`
Whether to continue collecting info for a value after a circular reference is found. Setting the option to `true` allows finding all circular references.
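A minimal sketch of the difference (hypothetical values with two separate cycles):

```js
const { stringifyInfo } = require('@discoveryjs/json-ext');

const a = {};
const b = {};
a.self = a; // first cycle
a.b = b;
b.self = b; // second cycle

// by default the walk stops at the first circular reference found
console.log(stringifyInfo(a).circular.length); // 1

// with continueOnCircular the walk continues and finds all of them
console.log(
    stringifyInfo(a, null, null, { continueOnCircular: true }).circular.length
); // 2
```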
### version
The version of the library, e.g. `"0.3.1"`.
## License
MIT


@@ -0,0 +1,808 @@
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() :
typeof define === 'function' && define.amd ? define(factory) :
(global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.jsonExt = factory());
}(this, (function () { 'use strict';
var name = "@discoveryjs/json-ext";
var version = "0.5.2";
var description = "A set of utilities that extend the use of JSON";
var keywords = [
"json",
"utils",
"stream",
"async",
"promise",
"stringify",
"info"
];
var author = "Roman Dvornov <rdvornov@gmail.com> (https://github.com/lahmatiy)";
var license = "MIT";
var repository = "discoveryjs/json-ext";
var main = "./src/index";
var browser = {
"./src/stringify-stream.js": "./src/stringify-stream-browser.js",
"./src/text-decoder.js": "./src/text-decoder-browser.js"
};
var scripts = {
test: "mocha --reporter progress",
lint: "eslint src test",
"lint-and-test": "npm run lint && npm test",
build: "rollup --config",
"test:all": "npm run test:src && npm run test:dist",
"test:src": "npm test",
"test:dist": "MODE=dist npm test && MODE=dist-min npm test",
"build-and-test": "npm run build && npm run test:dist",
coverage: "nyc npm test",
travis: "nyc npm run lint-and-test && npm run coveralls",
coveralls: "nyc report --reporter=text-lcov | coveralls",
prepublishOnly: "npm run build"
};
var dependencies = {
};
var devDependencies = {
"@rollup/plugin-commonjs": "^15.1.0",
"@rollup/plugin-json": "^4.1.0",
"@rollup/plugin-node-resolve": "^9.0.0",
chalk: "^4.1.0",
coveralls: "^3.1.0",
eslint: "^7.6.0",
mocha: "^8.1.1",
nyc: "^15.1.0",
rollup: "^2.28.2",
"rollup-plugin-terser": "^7.0.2"
};
var engines = {
node: ">=10.0.0"
};
var files = [
"dist",
"src"
];
var require$$0 = {
name: name,
version: version,
description: description,
keywords: keywords,
author: author,
license: license,
repository: repository,
main: main,
browser: browser,
scripts: scripts,
dependencies: dependencies,
devDependencies: devDependencies,
engines: engines,
files: files
};
const PrimitiveType = 1;
const ObjectType = 2;
const ArrayType = 3;
const PromiseType = 4;
const ReadableStringType = 5;
const ReadableObjectType = 6;
// https://tc39.es/ecma262/#table-json-single-character-escapes
const escapableCharCodeSubstitution = { // JSON Single Character Escape Sequences
0x08: '\\b',
0x09: '\\t',
0x0a: '\\n',
0x0c: '\\f',
0x0d: '\\r',
0x22: '\\\"',
0x5c: '\\\\'
};
function isLeadingSurrogate(code) {
return code >= 0xD800 && code <= 0xDBFF;
}
function isTrailingSurrogate(code) {
return code >= 0xDC00 && code <= 0xDFFF;
}
function isReadableStream(value) {
return (
typeof value.pipe === 'function' &&
typeof value._read === 'function' &&
typeof value._readableState === 'object' && value._readableState !== null
);
}
function replaceValue(holder, key, value, replacer) {
if (value && typeof value.toJSON === 'function') {
value = value.toJSON();
}
if (replacer !== null) {
value = replacer.call(holder, String(key), value);
}
switch (typeof value) {
case 'function':
case 'symbol':
value = undefined;
break;
case 'object':
if (value !== null) {
const cls = value.constructor;
if (cls === String || cls === Number || cls === Boolean) {
value = value.valueOf();
}
}
break;
}
return value;
}
function getTypeNative(value) {
if (value === null || typeof value !== 'object') {
return PrimitiveType;
}
if (Array.isArray(value)) {
return ArrayType;
}
return ObjectType;
}
function getTypeAsync(value) {
if (value === null || typeof value !== 'object') {
return PrimitiveType;
}
if (typeof value.then === 'function') {
return PromiseType;
}
if (isReadableStream(value)) {
return value._readableState.objectMode ? ReadableObjectType : ReadableStringType;
}
if (Array.isArray(value)) {
return ArrayType;
}
return ObjectType;
}
function normalizeReplacer(replacer) {
if (typeof replacer === 'function') {
return replacer;
}
if (Array.isArray(replacer)) {
const whitelist = new Set(replacer
.map(item => typeof item === 'string' || typeof item === 'number' ? String(item) : null)
.filter(item => typeof item === 'string')
);
whitelist.add('');
return (key, value) => whitelist.has(key) ? value : undefined;
}
return null;
}
function normalizeSpace(space) {
if (typeof space === 'number') {
if (!Number.isFinite(space) || space < 1) {
return false;
}
return ' '.repeat(Math.min(space, 10));
}
if (typeof space === 'string') {
return space.slice(0, 10) || false;
}
return false;
}
var utils = {
escapableCharCodeSubstitution,
isLeadingSurrogate,
isTrailingSurrogate,
type: {
PRIMITIVE: PrimitiveType,
PROMISE: PromiseType,
ARRAY: ArrayType,
OBJECT: ObjectType,
STRING_STREAM: ReadableStringType,
OBJECT_STREAM: ReadableObjectType
},
isReadableStream,
replaceValue,
getTypeNative,
getTypeAsync,
normalizeReplacer,
normalizeSpace
};
const {
normalizeReplacer: normalizeReplacer$1,
normalizeSpace: normalizeSpace$1,
replaceValue: replaceValue$1,
getTypeNative: getTypeNative$1,
getTypeAsync: getTypeAsync$1,
isLeadingSurrogate: isLeadingSurrogate$1,
isTrailingSurrogate: isTrailingSurrogate$1,
escapableCharCodeSubstitution: escapableCharCodeSubstitution$1,
type: {
PRIMITIVE,
OBJECT,
ARRAY,
PROMISE,
STRING_STREAM,
OBJECT_STREAM
}
} = utils;
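// Precomputed byte lengths in JSON-stringified UTF8 output for all char codes
// below 2048: 2 for single-character escapes, 6 for other control chars (\uXXXX),
// 1 for the rest of ASCII and 2 for two-byte UTF8 code points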
const charLength2048 = Array.from({ length: 2048 }).map((_, code) => {
if (escapableCharCodeSubstitution$1.hasOwnProperty(code)) {
return 2; // \X
}
if (code < 0x20) {
return 6; // \uXXXX
}
return code < 128 ? 1 : 2; // UTF8 bytes
});
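// Returns the number of bytes a string takes in JSON-stringified UTF8 output,
// including the two enclosing quotes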
function stringLength(str) {
let len = 0;
let prevLeadingSurrogate = false;
for (let i = 0; i < str.length; i++) {
const code = str.charCodeAt(i);
if (code < 2048) {
len += charLength2048[code];
} else if (isLeadingSurrogate$1(code)) {
len += 6; // \uXXXX since no pair with trailing surrogate yet
prevLeadingSurrogate = true;
continue;
} else if (isTrailingSurrogate$1(code)) {
len = prevLeadingSurrogate
? len - 2 // surrogate pair (4 bytes); the leading surrogate was already counted as 6 bytes, so subtract 2
: len + 6; // \uXXXX
} else {
len += 3; // code >= 2048 takes 3 bytes in UTF8
}
prevLeadingSurrogate = false;
}
return len + 2; // +2 for quotes
}
function primitiveLength(value) {
switch (typeof value) {
case 'string':
return stringLength(value);
case 'number':
return Number.isFinite(value) ? String(value).length : 4 /* null */;
case 'boolean':
return value ? 4 /* true */ : 5 /* false */;
case 'undefined':
case 'object':
return 4; /* null */
default:
return 0;
}
}
function spaceLength(space) {
space = normalizeSpace$1(space);
return typeof space === 'string' ? space.length : 0;
}
var stringifyInfo = function jsonStringifyInfo(value, replacer, space, options) {
function walk(holder, key, value) {
if (stop) {
return;
}
value = replaceValue$1(holder, key, value, replacer);
let type = getType(value);
// check for circular structure
if (type !== PRIMITIVE && stack.has(value)) {
circular.add(value);
length += 4; // treat as null
if (!options.continueOnCircular) {
stop = true;
}
return;
}
switch (type) {
case PRIMITIVE:
if (value !== undefined || Array.isArray(holder)) {
length += primitiveLength(value);
} else if (holder === root) {
length += 9; // FIXME: that's the length of undefined, should we normalize behaviour to convert it to null?
}
break;
case OBJECT: {
if (visited.has(value)) {
duplicate.add(value);
length += visited.get(value);
break;
}
const valueLength = length;
let entries = 0;
length += 2; // {}
stack.add(value);
for (const property in value) {
if (hasOwnProperty.call(value, property)) {
const prevLength = length;
walk(value, property, value[property]);
if (prevLength !== length) {
// value is printed
length += stringLength(property) + 1; // "property":
entries++;
}
}
}
if (entries > 1) {
length += entries - 1; // commas
}
stack.delete(value);
if (space > 0 && entries > 0) {
length += (1 + (stack.size + 1) * space + 1) * entries; // for each key-value: \n{space}
length += 1 + stack.size * space; // for }
}
visited.set(value, length - valueLength);
break;
}
case ARRAY: {
if (visited.has(value)) {
duplicate.add(value);
length += visited.get(value);
break;
}
const valueLength = length;
length += 2; // []
stack.add(value);
for (let i = 0; i < value.length; i++) {
walk(value, i, value[i]);
}
if (value.length > 1) {
length += value.length - 1; // commas
}
stack.delete(value);
if (space > 0 && value.length > 0) {
length += (1 + (stack.size + 1) * space) * value.length; // for each element: \n{space}
length += 1 + stack.size * space; // for ]
}
visited.set(value, length - valueLength);
break;
}
case PROMISE:
case STRING_STREAM:
async.add(value);
break;
case OBJECT_STREAM:
length += 2; // []
async.add(value);
break;
}
}
replacer = normalizeReplacer$1(replacer);
space = spaceLength(space);
options = options || {};
const visited = new Map();
const stack = new Set();
const duplicate = new Set();
const circular = new Set();
const async = new Set();
const getType = options.async ? getTypeAsync$1 : getTypeNative$1;
const root = { '': value };
let stop = false;
let length = 0;
walk(root, '', value);
return {
minLength: isNaN(length) ? Infinity : length,
circular: [...circular],
duplicate: [...duplicate],
async: [...async]
};
};
var stringifyStreamBrowser = () => {
throw new Error('Method is not supported');
};
var textDecoderBrowser = TextDecoder;
const { isReadableStream: isReadableStream$1 } = utils;
const STACK_OBJECT = 1;
const STACK_ARRAY = 2;
const decoder = new textDecoderBrowser();
function isObject(value) {
return value !== null && typeof value === 'object';
}
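// Rebase "at position N" in a JSON.parse() SyntaxError message from the parsed
// fragment's offset to the offset within the whole input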
function adjustPosition(error, parser) {
if (error.name === 'SyntaxError' && parser.jsonParseOffset) {
error.message = error.message.replace(/at position (\d+)/, (_, pos) =>
'at position ' + (Number(pos) + parser.jsonParseOffset)
);
}
return error;
}
function append(array, elements) {
// Note: Avoid using array.push(...elements) since it may lead to
// "RangeError: Maximum call stack size exceeded" for long arrays
const initialLength = array.length;
array.length += elements.length;
for (let i = 0; i < elements.length; i++) {
array[initialLength + i] = elements[i];
}
}
var parseChunked = function(chunkEmitter) {
let parser = new ChunkParser();
if (isObject(chunkEmitter) && isReadableStream$1(chunkEmitter)) {
return new Promise((resolve, reject) => {
chunkEmitter
.on('data', chunk => {
try {
parser.push(chunk);
} catch (e) {
reject(adjustPosition(e, parser));
parser = null;
}
})
.on('error', (e) => {
parser = null;
reject(e);
})
.on('end', () => {
try {
resolve(parser.finish());
} catch (e) {
reject(adjustPosition(e, parser));
} finally {
parser = null;
}
});
});
}
if (typeof chunkEmitter === 'function') {
const iterator = chunkEmitter();
if (isObject(iterator) && (Symbol.iterator in iterator || Symbol.asyncIterator in iterator)) {
return new Promise(async (resolve, reject) => {
try {
for await (const chunk of iterator) {
parser.push(chunk);
}
resolve(parser.finish());
} catch (e) {
reject(adjustPosition(e, parser));
} finally {
parser = null;
}
});
}
}
throw new Error(
'Chunk emitter should be readable stream, generator, ' +
'async generator or function returning an iterable object'
);
};
class ChunkParser {
constructor() {
this.value = undefined;
this.valueStack = null;
this.stack = new Array(100);
this.lastFlushDepth = 0;
this.flushDepth = 0;
this.stateString = false;
this.stateStringEscape = false;
this.pendingByteSeq = null;
this.pendingChunk = null;
this.pos = 0;
this.jsonParseOffset = 0;
}
flush(chunk, start, end) {
let fragment = chunk.slice(start, end);
this.jsonParseOffset = this.pos; // used for position correction in a JSON.parse() error, if any
// Prepend pending chunk if any
if (this.pendingChunk !== null) {
fragment = this.pendingChunk + fragment;
this.pendingChunk = null;
}
// Skip a comma at the beginning if any
if (fragment[0] === ',') {
fragment = fragment.slice(1);
this.jsonParseOffset++;
}
if (this.flushDepth === this.lastFlushDepth) {
// Depth didn't change, so it's a root value or a set of entries/elements
if (this.flushDepth > 0) {
this.jsonParseOffset--;
// Append new entries or elements
if (this.stack[this.flushDepth - 1] === STACK_OBJECT) {
Object.assign(this.valueStack.value, JSON.parse('{' + fragment + '}'));
} else {
append(this.valueStack.value, JSON.parse('[' + fragment + ']'));
}
} else {
// That's an entire value on a top level
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
}
} else if (this.flushDepth > this.lastFlushDepth) {
// Add missing closing braces/brackets
for (let i = this.flushDepth - 1; i >= this.lastFlushDepth; i--) {
fragment += this.stack[i] === STACK_OBJECT ? '}' : ']';
}
if (this.lastFlushDepth === 0) {
// That's a root value
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
} else {
this.jsonParseOffset--;
// Parse fragment and append to current value
if (this.stack[this.lastFlushDepth - 1] === STACK_OBJECT) {
Object.assign(this.valueStack.value, JSON.parse('{' + fragment + '}'));
} else {
append(this.valueStack.value, JSON.parse('[' + fragment + ']'));
}
}
// Descend to the deepest object/array, which is now the current one
for (let i = this.lastFlushDepth || 1; i < this.flushDepth; i++) {
let value = this.valueStack.value;
if (this.stack[i - 1] === STACK_OBJECT) {
// find last entry
let key;
// eslint-disable-next-line curly
for (key in value);
value = value[key];
} else {
// last element
value = value[value.length - 1];
}
this.valueStack = {
value,
prev: this.valueStack
};
}
} else { // this.flushDepth < this.lastFlushDepth
// Add missing opening braces/brackets
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.jsonParseOffset--;
fragment = (this.stack[i] === STACK_OBJECT ? '{' : '[') + fragment;
}
if (this.stack[this.lastFlushDepth - 1] === STACK_OBJECT) {
Object.assign(this.valueStack.value, JSON.parse(fragment));
} else {
append(this.valueStack.value, JSON.parse(fragment));
}
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.valueStack = this.valueStack.prev;
}
}
this.pos += end - start;
this.lastFlushDepth = this.flushDepth;
}
push(chunk) {
if (typeof chunk !== 'string') {
// Suppose chunk is Buffer or Uint8Array
// Prepend uncompleted byte sequence if any
if (this.pendingByteSeq !== null) {
const origRawChunk = chunk;
chunk = new Uint8Array(this.pendingByteSeq.length + origRawChunk.length);
chunk.set(this.pendingByteSeq);
chunk.set(origRawChunk, this.pendingByteSeq.length);
this.pendingByteSeq = null;
}
// In case of a Buffer/Uint8Array, the input is encoded in UTF8
// Seek for an incomplete UTF8 byte sequence at the end of the chunk
// This matters only when more chunks are expected and the chunk may end in the middle of a multi-byte character
if (chunk[chunk.length - 1] > 127) {
for (let seqLength = 0; seqLength < chunk.length; seqLength++) {
const byte = chunk[chunk.length - 1 - seqLength];
// 10xxxxxx - 2nd, 3rd or 4th byte
// 110xxxxx - first byte of 2-byte sequence
// 1110xxxx - first byte of 3-byte sequence
// 11110xxx - first byte of 4-byte sequence
if (byte >> 6 === 3) {
seqLength++;
// If the sequence is really incomplete, then preserve it
// for the next chunk and cut it off from the current chunk
if ((seqLength !== 4 && byte >> 3 === 0b11110) ||
(seqLength !== 3 && byte >> 4 === 0b1110) ||
(seqLength !== 2 && byte >> 5 === 0b110)) {
this.pendingByteSeq = chunk.slice(chunk.length - seqLength);
chunk = chunk.slice(0, -seqLength);
}
break;
}
}
}
// Convert the chunk to a string, since a single decode per chunk
// is much more efficient than decoding multiple small substrings
chunk = decoder.decode(chunk);
}
const chunkLength = chunk.length;
let lastFlushPoint = 0;
let flushPoint = 0;
// Main scan loop
scan: for (let i = 0; i < chunkLength; i++) {
if (this.stateString) {
for (; i < chunkLength; i++) {
if (this.stateStringEscape) {
this.stateStringEscape = false;
} else {
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = false;
continue scan;
case 0x5C: /* \ */
this.stateStringEscape = true;
}
}
}
break;
}
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = true;
this.stateStringEscape = false;
break;
case 0x2C: /* , */
flushPoint = i;
break;
case 0x7B: /* { */
// begin object
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_OBJECT;
break;
case 0x5B: /* [ */
// begin array
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_ARRAY;
break;
case 0x5D: /* ] */
case 0x7D: /* } */
// end object or array
flushPoint = i + 1;
this.flushDepth--;
if (this.flushDepth < this.lastFlushDepth) {
this.flush(chunk, lastFlushPoint, flushPoint);
lastFlushPoint = flushPoint;
}
break;
}
}
if (flushPoint > lastFlushPoint) {
this.flush(chunk, lastFlushPoint, flushPoint);
}
// Produce pendingChunk if any
if (flushPoint < chunkLength) {
const newPending = chunk.slice(flushPoint, chunkLength);
this.pendingChunk = this.pendingChunk !== null
? this.pendingChunk + newPending
: newPending;
}
}
finish() {
if (this.pendingChunk !== null) {
if (/[^ \t\r\n]/.test(this.pendingChunk)) {
this.flush('', 0, 0);
}
this.pendingChunk = null;
}
return this.value;
}
}
var src = {
version: require$$0.version,
stringifyInfo: stringifyInfo,
stringifyStream: stringifyStreamBrowser,
parseChunked: parseChunked
};
return src;
})));

File diff suppressed because one or more lines are too long


@@ -0,0 +1,56 @@
{
"name": "@discoveryjs/json-ext",
"version": "0.5.2",
"description": "A set of utilities that extend the use of JSON",
"keywords": [
"json",
"utils",
"stream",
"async",
"promise",
"stringify",
"info"
],
"author": "Roman Dvornov <rdvornov@gmail.com> (https://github.com/lahmatiy)",
"license": "MIT",
"repository": "discoveryjs/json-ext",
"main": "./src/index",
"browser": {
"./src/stringify-stream.js": "./src/stringify-stream-browser.js",
"./src/text-decoder.js": "./src/text-decoder-browser.js"
},
"scripts": {
"test": "mocha --reporter progress",
"lint": "eslint src test",
"lint-and-test": "npm run lint && npm test",
"build": "rollup --config",
"test:all": "npm run test:src && npm run test:dist",
"test:src": "npm test",
"test:dist": "MODE=dist npm test && MODE=dist-min npm test",
"build-and-test": "npm run build && npm run test:dist",
"coverage": "nyc npm test",
"travis": "nyc npm run lint-and-test && npm run coveralls",
"coveralls": "nyc report --reporter=text-lcov | coveralls",
"prepublishOnly": "npm run build"
},
"dependencies": {},
"devDependencies": {
"@rollup/plugin-commonjs": "^15.1.0",
"@rollup/plugin-json": "^4.1.0",
"@rollup/plugin-node-resolve": "^9.0.0",
"chalk": "^4.1.0",
"coveralls": "^3.1.0",
"eslint": "^7.6.0",
"mocha": "^8.1.1",
"nyc": "^15.1.0",
"rollup": "^2.28.2",
"rollup-plugin-terser": "^7.0.2"
},
"engines": {
"node": ">=10.0.0"
},
"files": [
"dist",
"src"
]
}


@@ -0,0 +1,6 @@
module.exports = {
version: require('../package.json').version,
stringifyInfo: require('./stringify-info'),
stringifyStream: require('./stringify-stream'),
parseChunked: require('./parse-chunked')
};


@@ -0,0 +1,339 @@
const { isReadableStream } = require('./utils');
const TextDecoder = require('./text-decoder');
const STACK_OBJECT = 1;
const STACK_ARRAY = 2;
const decoder = new TextDecoder();
function isObject(value) {
return value !== null && typeof value === 'object';
}
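// Rebase "at position N" in a JSON.parse() SyntaxError message from the parsed
// fragment's offset to the offset within the whole input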
function adjustPosition(error, parser) {
if (error.name === 'SyntaxError' && parser.jsonParseOffset) {
error.message = error.message.replace(/at position (\d+)/, (_, pos) =>
'at position ' + (Number(pos) + parser.jsonParseOffset)
);
}
return error;
}
function append(array, elements) {
// Note: Avoid using array.push(...elements) since it may lead to
// "RangeError: Maximum call stack size exceeded" for long arrays
const initialLength = array.length;
array.length += elements.length;
for (let i = 0; i < elements.length; i++) {
array[initialLength + i] = elements[i];
}
}
module.exports = function(chunkEmitter) {
let parser = new ChunkParser();
if (isObject(chunkEmitter) && isReadableStream(chunkEmitter)) {
return new Promise((resolve, reject) => {
chunkEmitter
.on('data', chunk => {
try {
parser.push(chunk);
} catch (e) {
reject(adjustPosition(e, parser));
parser = null;
}
})
.on('error', (e) => {
parser = null;
reject(e);
})
.on('end', () => {
try {
resolve(parser.finish());
} catch (e) {
reject(adjustPosition(e, parser));
} finally {
parser = null;
}
});
});
}
if (typeof chunkEmitter === 'function') {
const iterator = chunkEmitter();
if (isObject(iterator) && (Symbol.iterator in iterator || Symbol.asyncIterator in iterator)) {
return new Promise(async (resolve, reject) => {
try {
for await (const chunk of iterator) {
parser.push(chunk);
}
resolve(parser.finish());
} catch (e) {
reject(adjustPosition(e, parser));
} finally {
parser = null;
}
});
}
}
throw new Error(
'Chunk emitter should be readable stream, generator, ' +
'async generator or function returning an iterable object'
);
};
class ChunkParser {
constructor() {
this.value = undefined;
this.valueStack = null;
this.stack = new Array(100);
this.lastFlushDepth = 0;
this.flushDepth = 0;
this.stateString = false;
this.stateStringEscape = false;
this.pendingByteSeq = null;
this.pendingChunk = null;
this.pos = 0;
this.jsonParseOffset = 0;
}
flush(chunk, start, end) {
let fragment = chunk.slice(start, end);
this.jsonParseOffset = this.pos; // used for position correction in a JSON.parse() error, if any
// Prepend pending chunk if any
if (this.pendingChunk !== null) {
fragment = this.pendingChunk + fragment;
this.pendingChunk = null;
}
// Skip a comma at the beginning if any
if (fragment[0] === ',') {
fragment = fragment.slice(1);
this.jsonParseOffset++;
}
if (this.flushDepth === this.lastFlushDepth) {
// Depth didn't change, so it's a root value or a set of entries/elements
if (this.flushDepth > 0) {
this.jsonParseOffset--;
// Append new entries or elements
if (this.stack[this.flushDepth - 1] === STACK_OBJECT) {
Object.assign(this.valueStack.value, JSON.parse('{' + fragment + '}'));
} else {
append(this.valueStack.value, JSON.parse('[' + fragment + ']'));
}
} else {
// That's an entire value on a top level
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
}
} else if (this.flushDepth > this.lastFlushDepth) {
// Add missing closing braces/brackets
for (let i = this.flushDepth - 1; i >= this.lastFlushDepth; i--) {
fragment += this.stack[i] === STACK_OBJECT ? '}' : ']';
}
if (this.lastFlushDepth === 0) {
// That's a root value
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
} else {
this.jsonParseOffset--;
// Parse fragment and append to current value
if (this.stack[this.lastFlushDepth - 1] === STACK_OBJECT) {
Object.assign(this.valueStack.value, JSON.parse('{' + fragment + '}'));
} else {
append(this.valueStack.value, JSON.parse('[' + fragment + ']'));
}
}
// Descend to the deepest object/array, which is now the current one
for (let i = this.lastFlushDepth || 1; i < this.flushDepth; i++) {
let value = this.valueStack.value;
if (this.stack[i - 1] === STACK_OBJECT) {
// find last entry
let key;
// eslint-disable-next-line curly
for (key in value);
value = value[key];
} else {
// last element
value = value[value.length - 1];
}
this.valueStack = {
value,
prev: this.valueStack
};
}
} else { // this.flushDepth < this.lastFlushDepth
// Add missing opening braces/brackets
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.jsonParseOffset--;
fragment = (this.stack[i] === STACK_OBJECT ? '{' : '[') + fragment;
}
if (this.stack[this.lastFlushDepth - 1] === STACK_OBJECT) {
Object.assign(this.valueStack.value, JSON.parse(fragment));
} else {
append(this.valueStack.value, JSON.parse(fragment));
}
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.valueStack = this.valueStack.prev;
}
}
this.pos += end - start;
this.lastFlushDepth = this.flushDepth;
}
push(chunk) {
if (typeof chunk !== 'string') {
// Suppose chunk is Buffer or Uint8Array
// Prepend uncompleted byte sequence if any
if (this.pendingByteSeq !== null) {
const origRawChunk = chunk;
chunk = new Uint8Array(this.pendingByteSeq.length + origRawChunk.length);
chunk.set(this.pendingByteSeq);
chunk.set(origRawChunk, this.pendingByteSeq.length);
this.pendingByteSeq = null;
}
// In case of a Buffer/Uint8Array, the input is encoded in UTF8
// Seek for an incomplete UTF8 byte sequence at the end of the chunk
// This matters only when more chunks are expected and the chunk may end in the middle of a multi-byte character
if (chunk[chunk.length - 1] > 127) {
for (let seqLength = 0; seqLength < chunk.length; seqLength++) {
const byte = chunk[chunk.length - 1 - seqLength];
// 10xxxxxx - 2nd, 3rd or 4th byte
// 110xxxxx - first byte of 2-byte sequence
// 1110xxxx - first byte of 3-byte sequence
// 11110xxx - first byte of 4-byte sequence
if (byte >> 6 === 3) {
seqLength++;
// If the sequence is really incomplete, then preserve it
// for the next chunk and cut it off from the current chunk
if ((seqLength !== 4 && byte >> 3 === 0b11110) ||
(seqLength !== 3 && byte >> 4 === 0b1110) ||
(seqLength !== 2 && byte >> 5 === 0b110)) {
this.pendingByteSeq = chunk.slice(chunk.length - seqLength);
chunk = chunk.slice(0, -seqLength);
}
break;
}
}
}
// Convert the chunk to a string, since a single decode per chunk
// is much more efficient than decoding multiple small substrings
chunk = decoder.decode(chunk);
}
const chunkLength = chunk.length;
let lastFlushPoint = 0;
let flushPoint = 0;
// Main scan loop
scan: for (let i = 0; i < chunkLength; i++) {
if (this.stateString) {
for (; i < chunkLength; i++) {
if (this.stateStringEscape) {
this.stateStringEscape = false;
} else {
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = false;
continue scan;
case 0x5C: /* \ */
this.stateStringEscape = true;
}
}
}
break;
}
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = true;
this.stateStringEscape = false;
break;
case 0x2C: /* , */
flushPoint = i;
break;
case 0x7B: /* { */
// begin object
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_OBJECT;
break;
case 0x5B: /* [ */
// begin array
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_ARRAY;
break;
case 0x5D: /* ] */
case 0x7D: /* } */
// end object or array
flushPoint = i + 1;
this.flushDepth--;
if (this.flushDepth < this.lastFlushDepth) {
this.flush(chunk, lastFlushPoint, flushPoint);
lastFlushPoint = flushPoint;
}
break;
}
}
if (flushPoint > lastFlushPoint) {
this.flush(chunk, lastFlushPoint, flushPoint);
}
// Produce pendingChunk if any
if (flushPoint < chunkLength) {
const newPending = chunk.slice(flushPoint, chunkLength);
this.pendingChunk = this.pendingChunk !== null
? this.pendingChunk + newPending
: newPending;
}
}
finish() {
if (this.pendingChunk !== null) {
if (/[^ \t\r\n]/.test(this.pendingChunk)) {
this.flush('', 0, 0);
}
this.pendingChunk = null;
}
return this.value;
}
};


@@ -0,0 +1,224 @@
const {
normalizeReplacer,
normalizeSpace,
replaceValue,
getTypeNative,
getTypeAsync,
isLeadingSurrogate,
isTrailingSurrogate,
escapableCharCodeSubstitution,
type: {
PRIMITIVE,
OBJECT,
ARRAY,
PROMISE,
STRING_STREAM,
OBJECT_STREAM
}
} = require('./utils');
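// Precomputed byte lengths in JSON-stringified UTF8 output for all char codes
// below 2048: 2 for single-character escapes, 6 for other control chars (\uXXXX),
// 1 for the rest of ASCII and 2 for two-byte UTF8 code points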
const charLength2048 = Array.from({ length: 2048 }).map((_, code) => {
if (escapableCharCodeSubstitution.hasOwnProperty(code)) {
return 2; // \X
}
if (code < 0x20) {
return 6; // \uXXXX
}
return code < 128 ? 1 : 2; // UTF8 bytes
});
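// Returns the number of bytes a string takes in JSON-stringified UTF8 output,
// including the two enclosing quotes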
function stringLength(str) {
let len = 0;
let prevLeadingSurrogate = false;
for (let i = 0; i < str.length; i++) {
const code = str.charCodeAt(i);
if (code < 2048) {
len += charLength2048[code];
} else if (isLeadingSurrogate(code)) {
len += 6; // \uXXXX since no pair with trailing surrogate yet
prevLeadingSurrogate = true;
continue;
} else if (isTrailingSurrogate(code)) {
len = prevLeadingSurrogate
? len - 2 // surrogate pair (4 bytes); the leading surrogate was already counted as 6 bytes, so subtract 2
: len + 6; // \uXXXX
} else {
len += 3; // code >= 2048 takes 3 bytes in UTF8
}
prevLeadingSurrogate = false;
}
return len + 2; // +2 for quotes
}
function primitiveLength(value) {
switch (typeof value) {
case 'string':
return stringLength(value);
case 'number':
return Number.isFinite(value) ? String(value).length : 4 /* null */;
case 'boolean':
return value ? 4 /* true */ : 5 /* false */;
case 'undefined':
case 'object':
return 4; /* null */
default:
return 0;
}
}
function spaceLength(space) {
space = normalizeSpace(space);
return typeof space === 'string' ? space.length : 0;
}
module.exports = function jsonStringifyInfo(value, replacer, space, options) {
function walk(holder, key, value) {
if (stop) {
return;
}
value = replaceValue(holder, key, value, replacer);
let type = getType(value);
// check for circular structure
if (type !== PRIMITIVE && stack.has(value)) {
circular.add(value);
length += 4; // treat as null
if (!options.continueOnCircular) {
stop = true;
}
return;
}
switch (type) {
case PRIMITIVE:
if (value !== undefined || Array.isArray(holder)) {
length += primitiveLength(value);
} else if (holder === root) {
length += 9; // FIXME: that's the length of undefined, should we normalize behaviour to convert it to null?
}
break;
case OBJECT: {
if (visited.has(value)) {
duplicate.add(value);
length += visited.get(value);
break;
}
const valueLength = length;
let entries = 0;
length += 2; // {}
stack.add(value);
for (const property in value) {
if (hasOwnProperty.call(value, property)) {
const prevLength = length;
walk(value, property, value[property]);
if (prevLength !== length) {
// value is printed
length += stringLength(property) + 1; // "property":
entries++;
}
}
}
if (entries > 1) {
length += entries - 1; // commas
}
stack.delete(value);
if (space > 0 && entries > 0) {
length += (1 + (stack.size + 1) * space + 1) * entries; // for each key-value: \n{space}
length += 1 + stack.size * space; // for }
}
visited.set(value, length - valueLength);
break;
}
case ARRAY: {
if (visited.has(value)) {
duplicate.add(value);
length += visited.get(value);
break;
}
const valueLength = length;
length += 2; // []
stack.add(value);
for (let i = 0; i < value.length; i++) {
walk(value, i, value[i]);
}
if (value.length > 1) {
length += value.length - 1; // commas
}
stack.delete(value);
if (space > 0 && value.length > 0) {
length += (1 + (stack.size + 1) * space) * value.length; // for each element: \n{space}
length += 1 + stack.size * space; // for ]
}
visited.set(value, length - valueLength);
break;
}
case PROMISE:
case STRING_STREAM:
async.add(value);
break;
case OBJECT_STREAM:
length += 2; // []
async.add(value);
break;
}
}
replacer = normalizeReplacer(replacer);
space = spaceLength(space);
options = options || {};
const visited = new Map();
const stack = new Set();
const duplicate = new Set();
const circular = new Set();
const async = new Set();
const getType = options.async ? getTypeAsync : getTypeNative;
const root = { '': value };
let stop = false;
let length = 0;
walk(root, '', value);
return {
minLength: isNaN(length) ? Infinity : length,
circular: [...circular],
duplicate: [...duplicate],
async: [...async]
};
};


@@ -0,0 +1,3 @@
module.exports = () => {
throw new Error('Method is not supported');
};


@@ -0,0 +1,394 @@
const { Readable } = require('stream');
const {
normalizeReplacer,
normalizeSpace,
replaceValue,
getTypeAsync,
type: {
PRIMITIVE,
OBJECT,
ARRAY,
PROMISE,
STRING_STREAM,
OBJECT_STREAM
}
} = require('./utils');
const noop = () => {};
// TODO: Remove when drop support for Node.js 10
// Node.js 10 has no well-formed JSON.stringify()
// https://github.com/tc39/proposal-well-formed-stringify
// Adopted code from https://bugs.chromium.org/p/v8/issues/detail?id=7782#c12
const wellformedStringStringify = JSON.stringify('\ud800') === '"\\ud800"'
? JSON.stringify
: s => JSON.stringify(s).replace(
/\p{Surrogate}/gu,
m => `\\u${m.charCodeAt(0).toString(16)}`
);
function push() {
this.push(this._stack.value);
this.popStack();
}
function pushPrimitive(value) {
switch (typeof value) {
case 'string':
this.push(this.encodeString(value));
break;
case 'number':
this.push(Number.isFinite(value) ? this.encodeNumber(value) : 'null');
break;
case 'boolean':
this.push(value ? 'true' : 'false');
break;
case 'undefined':
case 'object': // typeof null === 'object'
this.push('null');
break;
default:
this.destroy(new TypeError(`Do not know how to serialize a ${value.constructor && value.constructor.name || typeof value}`));
}
}
function processObjectEntry(key) {
const current = this._stack;
if (!current.first) {
current.first = true;
} else {
this.push(',');
}
if (this.space) {
this.push(`\n${this.space.repeat(this._depth)}${this.encodeString(key)}: `);
} else {
this.push(this.encodeString(key) + ':');
}
}
function processObject() {
const current = this._stack;
// when no keys are left, remove the object from the stack
if (current.index === current.keys.length) {
if (this.space && current.first) {
this.push(`\n${this.space.repeat(this._depth - 1)}}`);
} else {
this.push('}');
}
this.popStack();
return;
}
const key = current.keys[current.index];
this.processValue(current.value, key, current.value[key], processObjectEntry);
current.index++;
}
function processArrayItem(index) {
if (index !== 0) {
this.push(',');
}
if (this.space) {
this.push(`\n${this.space.repeat(this._depth)}`);
}
}
function processArray() {
const current = this._stack;
if (current.index === current.value.length) {
if (this.space && current.index > 0) {
this.push(`\n${this.space.repeat(this._depth - 1)}]`);
} else {
this.push(']');
}
this.popStack();
return;
}
this.processValue(current.value, current.index, current.value[current.index], processArrayItem);
current.index++;
}
function createStreamReader(fn) {
return function() {
const current = this._stack;
const data = current.value.read(this._readSize);
if (data !== null) {
current.first = false;
fn.call(this, data, current);
} else {
if (current.first && !current.value._readableState.reading) {
this.popStack();
} else {
current.first = true;
current.awaiting = true;
}
}
};
}
const processReadableObject = createStreamReader(function(data, current) {
this.processValue(current.value, current.index, data, processArrayItem);
current.index++;
});
const processReadableString = createStreamReader(function(data) {
this.push(data);
});
class JsonStringifyStream extends Readable {
constructor(value, replacer, space) {
super({
autoDestroy: true
});
this.replacer = normalizeReplacer(replacer);
this.space = normalizeSpace(space);
this._depth = 0;
this.error = null;
this._processing = false;
this._ended = false;
this._readSize = 0;
this._buffer = '';
this._stack = null;
this._visited = new WeakSet();
this.pushStack({
handler: () => {
this.popStack();
this.processValue({ '': value }, '', value, noop);
}
});
}
encodeString(value) {
if (/[^\x20-\uD799]|[\x22\x5c]/.test(value)) {
return wellformedStringStringify(value);
}
return '"' + value + '"';
}
encodeNumber(value) {
return value;
}
processValue(holder, key, value, callback) {
value = replaceValue(holder, key, value, this.replacer);
let type = getTypeAsync(value);
switch (type) {
case PRIMITIVE:
if (callback !== processObjectEntry || value !== undefined) {
callback.call(this, key);
pushPrimitive.call(this, value);
}
break;
case OBJECT:
callback.call(this, key);
// check for circular structure
if (this._visited.has(value)) {
return this.destroy(new TypeError('Converting circular structure to JSON'));
}
this._visited.add(value);
this._depth++;
this.push('{');
this.pushStack({
handler: processObject,
value,
index: 0,
first: false,
keys: Object.keys(value)
});
break;
case ARRAY:
callback.call(this, key);
// check for circular structure
if (this._visited.has(value)) {
return this.destroy(new TypeError('Converting circular structure to JSON'));
}
this._visited.add(value);
this.push('[');
this.pushStack({
handler: processArray,
value,
index: 0
});
this._depth++;
break;
case PROMISE:
this.pushStack({
handler: noop,
awaiting: true
});
Promise.resolve(value)
.then(resolved => {
this.popStack();
this.processValue(holder, key, resolved, callback);
this.processStack();
})
.catch(error => {
this.destroy(error);
});
break;
case STRING_STREAM:
case OBJECT_STREAM:
callback.call(this, key);
// TODO: Remove when drop support for Node.js 10
// Use `_readableState.endEmitted` as a fallback, since Node.js 10 has no `readableEnded` getter
if (value.readableEnded || value._readableState.endEmitted) {
return this.destroy(new Error('Readable Stream has ended before it was serialized. All stream data have been lost'));
}
if (value.readableFlowing) {
return this.destroy(new Error('Readable Stream is in flowing mode, data may have been lost. Trying to pause stream.'));
}
if (type === OBJECT_STREAM) {
this.push('[');
this.pushStack({
handler: push,
value: this.space ? '\n' + this.space.repeat(this._depth) + ']' : ']'
});
this._depth++;
}
const self = this.pushStack({
handler: type === OBJECT_STREAM ? processReadableObject : processReadableString,
value,
index: 0,
first: false,
awaiting: !value.readable || value.readableLength === 0
});
const continueProcessing = () => {
if (self.awaiting) {
self.awaiting = false;
this.processStack();
}
};
value.once('error', error => this.destroy(error));
value.once('end', continueProcessing);
value.on('readable', continueProcessing);
break;
}
}
pushStack(node) {
node.prev = this._stack;
return this._stack = node;
}
popStack() {
const { handler, value } = this._stack;
if (handler === processObject || handler === processArray || handler === processReadableObject) {
this._visited.delete(value);
this._depth--;
}
this._stack = this._stack.prev;
}
processStack() {
if (this._processing || this._ended) {
return;
}
try {
this._processing = true;
while (this._stack !== null && !this._stack.awaiting) {
this._stack.handler.call(this);
if (!this._processing) {
return;
}
}
this._processing = false;
} catch (error) {
this.destroy(error);
return;
}
if (this._stack === null && !this._ended) {
this._finish();
this.push(null);
}
}
push(data) {
if (data !== null) {
this._buffer += data;
// check buffer overflow
if (this._buffer.length < this._readSize) {
return;
}
// flush buffer
data = this._buffer;
this._buffer = '';
this._processing = false;
}
super.push(data);
}
_read(size) {
// start processing
this._readSize = size || this.readableHighWaterMark;
this.processStack();
}
_finish() {
this._ended = true;
this._processing = false;
this._stack = null;
this._visited = null;
if (this._buffer && this._buffer.length) {
super.push(this._buffer); // flush buffer
}
this._buffer = '';
}
_destroy(error, cb) {
this.error = this.error || error;
this._finish();
cb(error);
}
}
module.exports = function createJsonStringifyStream(value, replacer, space) {
return new JsonStringifyStream(value, replacer, space);
};


@@ -0,0 +1 @@
module.exports = TextDecoder;


@@ -0,0 +1 @@
module.exports = require('util').TextDecoder;


@@ -0,0 +1,148 @@
const PrimitiveType = 1;
const ObjectType = 2;
const ArrayType = 3;
const PromiseType = 4;
const ReadableStringType = 5;
const ReadableObjectType = 6;
// https://tc39.es/ecma262/#table-json-single-character-escapes
const escapableCharCodeSubstitution = { // JSON Single Character Escape Sequences
0x08: '\\b',
0x09: '\\t',
0x0a: '\\n',
0x0c: '\\f',
0x0d: '\\r',
0x22: '\\\"',
0x5c: '\\\\'
};
function isLeadingSurrogate(code) {
return code >= 0xD800 && code <= 0xDBFF;
}
function isTrailingSurrogate(code) {
return code >= 0xDC00 && code <= 0xDFFF;
}
function isReadableStream(value) {
return (
typeof value.pipe === 'function' &&
typeof value._read === 'function' &&
typeof value._readableState === 'object' && value._readableState !== null
);
}
function replaceValue(holder, key, value, replacer) {
if (value && typeof value.toJSON === 'function') {
value = value.toJSON();
}
if (replacer !== null) {
value = replacer.call(holder, String(key), value);
}
switch (typeof value) {
case 'function':
case 'symbol':
value = undefined;
break;
case 'object':
if (value !== null) {
const cls = value.constructor;
if (cls === String || cls === Number || cls === Boolean) {
value = value.valueOf();
}
}
break;
}
return value;
}
function getTypeNative(value) {
if (value === null || typeof value !== 'object') {
return PrimitiveType;
}
if (Array.isArray(value)) {
return ArrayType;
}
return ObjectType;
}
function getTypeAsync(value) {
if (value === null || typeof value !== 'object') {
return PrimitiveType;
}
if (typeof value.then === 'function') {
return PromiseType;
}
if (isReadableStream(value)) {
return value._readableState.objectMode ? ReadableObjectType : ReadableStringType;
}
if (Array.isArray(value)) {
return ArrayType;
}
return ObjectType;
}
function normalizeReplacer(replacer) {
if (typeof replacer === 'function') {
return replacer;
}
if (Array.isArray(replacer)) {
const whitelist = new Set(replacer
.map(item => typeof item === 'string' || typeof item === 'number' ? String(item) : null)
.filter(item => typeof item === 'string')
);
whitelist.add('');
return (key, value) => whitelist.has(key) ? value : undefined;
}
return null;
}
function normalizeSpace(space) {
if (typeof space === 'number') {
if (!Number.isFinite(space) || space < 1) {
return false;
}
return ' '.repeat(Math.min(space, 10));
}
if (typeof space === 'string') {
return space.slice(0, 10) || false;
}
return false;
}
module.exports = {
escapableCharCodeSubstitution,
isLeadingSurrogate,
isTrailingSurrogate,
type: {
PRIMITIVE: PrimitiveType,
PROMISE: PromiseType,
ARRAY: ArrayType,
OBJECT: ObjectType,
STRING_STREAM: ReadableStringType,
OBJECT_STREAM: ReadableObjectType
},
isReadableStream,
replaceValue,
getTypeNative,
getTypeAsync,
normalizeReplacer,
normalizeSpace
};