Yarn install for private module failing – unexpected end of file

Hi all,

What is the current behavior?
I’m currently trying to install a private module using yarn. However, I get the following error when I do:

yarn install
yarn install v1.17.3
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/@private/ngffwd-node-processes/-/ngffwd-node-processes-1.0.123.tgz: unexpected end of file".
info If you think this is a bug, please open a bug report with the information provided in "/opt/atlassian/pipelines/agent/build/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.

If the current behavior is a bug, please provide the steps to reproduce.
The commands I’m running are the following:

npm config set scripts-prepend-node-path auto
yarn install

What is the expected behavior?
We should be able to install the package with no issue. Previously we’ve been able to do so, but recently that has not been the case. We tried going back to previous versions with no apparent improvement. We don’t want to use npm install, as that would affect the yarn.lock, but we are getting notifications that the version is available on npm.

The yarn-error.log doesn’t yield much either, so I don’t want to paste it here.

Please mention your node.js, yarn and operating system version.
npm – v6.9.0
node – v10.16.3
yarn – 1.17.3

Thanks, any help or suggestions would be appreciated!


20 thoughts on “Yarn install for private module failing – unexpected end of file”

  1. This has been hitting us pretty frequently. I was able to reproduce it with curl, and it seems like the server is randomly closing the connection:

    ❯ curl -H 'Authorization: Bearer <token>' https://registry.yarnpkg.com/@org/package.tgz -i > package.tgz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
     47 6648k   47 3169k    0     0  2640k      0  0:00:02  0:00:01  0:00:01 2638k
    curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
    

    The only other weird thing is that the headers have age: 0 on the failed request. Maybe the issue occurs when the proxy doesn’t have the package cached?
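
    To compare the headers between good and bad requests without keeping the tarball around, something like this works (same placeholders as above; which cache headers show up depends on the CDN in front of the registry):

    curl -s -D - -o /dev/null -H 'Authorization: Bearer <token>' \
      https://registry.yarnpkg.com/@org/package.tgz | grep -iE '^(age|cf-cache-status|x-cache):'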

  2. Hi all,

    So just to update everyone, we figured out a way to fix the failures for now. What we had been doing was caching node in our CI tool (Bitbucket Pipelines), and that seemed to throw yarn off during the install.

    Instead, going forward we removed the node cache and everything seems to work again. We’re still not quite sure why that is; maybe it’s something with the way yarn handles its npm cache, or, as @meshulam mentioned, the proxy doesn’t have the package cached.

    For us this is okay for now, but I’ll keep this ticket open since others have reported weird error symptoms that I think are still worth addressing and understanding.

    Thanks everyone again for the feedback!

  3. I’ve been running npm cache clean --force to overcome this problem for now

    EDIT: Might have just been a fluke and rerunning yarn install worked

  4. We are also having this problem – sometimes on 50% of our CI builds. What is frustrating is that there is no helpful information in the error log or error message. It mostly surfaces in our CircleCI builds, but I have also experienced it locally on my system once in a while. For us, it always happens with the same private package, which we publish to NPM.

  5. Yes, we also found what @Mustack noted to be the case. It seems our private repo was throwing an integrity checksum error. However, Yarn was not outputting the correct error message, which threw us off the right trail. We have since transitioned back to npm. We are currently working with NPM support to figure out the main cause of the issue.

  6. Hi, we have the same issue, and npm support answered that it is due to the number of versions we have for this package, which causes their servers to fail.
    We are working with them to remove old versions.

  7. @pleunv not yet at the moment, we did send over the info to them.

    @Fley we also tried to create a new package based on the existing one, but the new package hit the issue immediately.

    At this point we’re investigating to see if there is something wrong with our private package.

  8. npm ci / npm install would work because it ignores the yarn.lock file completely, and uses registry.npmjs.org.

    Since this is an issue with the registry and not the local cache, no yarn install / check / cache command can fix this. The only solution with the yarn registry seems to be to wait until the issue with the affected package is resolved (e.g. wait 10 minutes to an hour), and retry.

    The best solution I’ve found is to switch to registry.npmjs.org, while still using the yarn tooling. This can be done per repo:

    echo 'registry "https://registry.npmjs.org"' > .yarnrc # in the repo, not the global file
    sed -i 's#registry.yarnpkg.com#registry.npmjs.org#' yarn.lock

    No change to the workflow is required: yarn install etc keeps on working as it used to.

    It’s a bit of a pain to do over dozens of repos and branches, but worth it to avoid this issue on CI builds.
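
    If you want to script the switch across many checkouts, a rough sketch along these lines does it (the ~/work glob is just an example – point it at wherever your repos live):

    for repo in ~/work/*/; do
      (
        cd "$repo" || exit
        echo 'registry "https://registry.npmjs.org"' > .yarnrc
        [ -f yarn.lock ] && sed -i 's#registry.yarnpkg.com#registry.npmjs.org#' yarn.lock
      )
    done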

    There doesn’t seem to be any good reason to use registry.yarnpkg.com anymore (see #5891 for some discussion) – the main reason it’s kept is to not break existing builds (which are actually pretty much broken because of this issue), so I’m doing this change for all my repos going forward.

    Update: This does not fix the issue at all.

  9. Unfortunately we did reproduce the issue using registry.npmjs.org. So it seems unrelated to registry.yarnpkg.com to me.

    [2/4] Fetching packages...
    error An unexpected error occurred: "https://registry.npmjs.org/@private/somepackage/-/somepackage-4.1.4.tgz: unexpected end of file".
    
  10. Indeed, we have exactly the same issue with npm install as with yarn install (only the error returned is different) so it seems to originate from the npm registry instead.

  11. This happens to us a lot in a k8s GitLab runner, and similar things happen with the PyPI (Python) repository. But pip automatically retries the download in such cases.

    I think it should not really matter what causes the issue – such failures are bound to happen if you have multiple virtualization/traffic-routing layers – yarn should be able to handle them itself by default.

    yarn install --network-timeout 100000
    yarn install v1.17.3
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    info If you think this is a bug, please open a bug report with the information provided in ... 
    info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
    error An unexpected error occurred: "https://registry.yarnpkg.com/antd/-/antd-3.24.2.tgz: unexpected end of file".
    
  12. Okay so this is clearly a “server terminating the connection early” issue from all the comments and I don’t think sharing more anecdotes will do anything to help fix the problem or help others.

    I’ll either close and lock the issue to avoid more thrash or we can talk about trying to implement a retry mechanism if yarn can detect an unexpectedly closed connection.

    Anyone volunteering to give the retry option a shot? Fetching is done here:

    async fetchFromExternal(): Promise<FetchedOverride> {
      const registry = this.config.registries[this.registry];

      try {
        const headers = this.requestHeaders();
        return await registry.request(
          this.reference,
          {
            headers: {
              'Accept-Encoding': 'gzip',
              ...headers,
            },
            buffer: true,
            process: (req, resolve, reject) => {
              // should we save this to the offline cache?
              const tarballMirrorPath = this.getTarballMirrorPath();
              const tarballCachePath = this.getTarballCachePath();

              const {hashValidateStream, integrityValidateStream, extractorStream} = this.createExtractor(
                resolve,
                reject,
              );

              req.pipe(hashValidateStream);
              hashValidateStream.pipe(integrityValidateStream);

              if (tarballMirrorPath) {
                integrityValidateStream.pipe(fs.createWriteStream(tarballMirrorPath)).on('error', reject);
              }

              if (tarballCachePath) {
                integrityValidateStream.pipe(fs.createWriteStream(tarballCachePath)).on('error', reject);
              }

              integrityValidateStream.pipe(extractorStream).on('error', reject);
            },
          },
          this.packageName,
        );
      } catch (err) {
        const tarballMirrorPath = this.getTarballMirrorPath();
        const tarballCachePath = this.getTarballCachePath();

        if (tarballMirrorPath && (await fsUtil.exists(tarballMirrorPath))) {
          await fsUtil.unlink(tarballMirrorPath);
        }
        if (tarballCachePath && (await fsUtil.exists(tarballCachePath))) {
          await fsUtil.unlink(tarballCachePath);
        }

        throw err;
      }
    }

    and these lines are actually implementing some retry logic when the server fails “properly”:

    if (res.statusCode === 408 || res.statusCode >= 500) {
      const description = `${res.statusCode} ${http.STATUS_CODES[res.statusCode]}`;
      if (!queueForRetry(this.reporter.lang('internalServerErrorRetrying', description))) {
        throw new ResponseError(this.reporter.lang('requestFailed', description), res.statusCode);
      } else {
        return;
      }
    }

    if (res.statusCode === 401 && res.headers['www-authenticate']) {
      const authMethods = res.headers['www-authenticate'].split(/,\s*/).map(s => s.toLowerCase());

      if (authMethods.indexOf('otp') !== -1) {
        reject(new OneTimePasswordError());
        return;
      }
    }

    if (body && typeof body.error === 'string') {
      reject(new Error(body.error));
      return;
    }

    if ([400, 401, 404].concat(params.rejectStatusCode || []).indexOf(res.statusCode) !== -1) {
      // So this is actually a rejection ... the hosted git resolver uses this to know whether http is supported
      resolve(false);
    } else if (res.statusCode >= 400) {
      const errMsg = (body && body.message) || reporter.lang('requestError', params.url, res.statusCode);
      reject(new Error(errMsg));
    } else {
      resolve(body);
    }
    };
    }

    We could probably add a length check there, comparing the actual response size to the content-length header from the server, or somehow detect ungraceful TCP terminations and handle them.
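
    For illustration, a length check along these lines (a rough sketch with a hypothetical helper name, not code that exists in the repo today) would be enough to turn a truncated body into an error we could queue for retry:

    // Sketch only: count the bytes arriving on the response stream and reject
    // when the body is shorter than the advertised content-length, so the
    // caller can retry instead of failing later with "unexpected end of file".
    function rejectOnTruncatedBody(res, reject) {
      const expected = parseInt(res.headers['content-length'], 10);
      if (Number.isNaN(expected)) {
        return; // no content-length header, nothing to compare against
      }

      let received = 0;
      res.on('data', chunk => {
        received += chunk.length;
      });
      res.on('end', () => {
        if (received < expected) {
          reject(new Error(`Tarball truncated: received ${received} of ${expected} bytes`));
        }
      });
    }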

  13. I added a PR which should help mitigate the issue. However, it will only reduce the failures, not eliminate them. The same error often happens multiple times in a row, and even 5 retries may not be enough.

    The only real fix is fixing the npm registry.

  14. I’m not proud but this is working for me while we wait for the PR to be merged:

    RUN for i in 1 2 3; do yarn install && break || sleep 1; done

  15. I’m not proud but this is working for me while we wait for the PR to be merged:

    RUN for i in 1 2 3; do yarn install && break || sleep 1; done

    Until this is properly fixed by Yarn, I’ve made a simple NPM module that works around it in a similar way. It’s easy to tell it to retry e.g. 100 times: yarn-retry --attempts 100. It also only retries on the unexpected end of file error, so it will not retry unnecessarily.
