Efficiently sharing a large node_modules directory between multiple TeamCity build jobs

The CI flow for our Node.js app looks roughly like this:

[flow diagram]

Currently, this all takes place in a single TeamCity ‘job’ with three ‘steps’ (the Test step runs 4 concurrent child processes).

Problems with the current approach:

  • The whole job takes too long – 15 minutes. (The Test subprocesses run in parallel, but this only shaves about 15% compared to running them serially.)
  • The Test step has jumbled log output from 4 child processes, and it’s painful figuring out what failed.

Desired approach

I want to split the above into six TeamCity jobs, using artifact and/or snapshot dependencies to compose them into the desired flow. This should make better use of our pool of four build agents (better parallelism), and it should make failures easier to pinpoint.

But I’m having trouble with sharing the node_modules from the first step so it can be reused by all four jobs in the Test phase. It takes about 3-5 minutes to run yarn (to set up node_modules), so I want to avoid repeating it on every Test job.

Also, most git pushes don’t actually change the npm dependencies, so the ‘Setup’ phase could often be bypassed for speed. CircleCI has a nice way to do this: it lets you cache your node_modules directory with a custom key such as node_modules.<HASH>, using a hash of your lockfile (yarn.lock or package-lock.json) – because the complete node_modules directory is more or less a function of the lockfile.
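To make that CircleCI-style key concrete, here's a minimal shell sketch of deriving a cache key from a lockfile hash; the lockfile content and key format below are illustrative stand-ins, not anything CircleCI or TeamCity actually provides:

```shell
#!/bin/sh
# Sketch: derive a cache key from the lockfile, CircleCI-style.
# The lockfile content and the key format here are illustrative only.
set -eu

workdir="$(mktemp -d)"
# Stand-in lockfile; in CI this would be the repo's real yarn.lock.
printf 'lodash@^4.17.21:\n  version "4.17.21"\n' > "$workdir/yarn.lock"

# Same lockfile bytes => same hash => same key, so an unchanged
# lockfile can reuse a previously cached node_modules.
hash="$(sha256sum "$workdir/yarn.lock" | cut -c1-16)"
cache_key="node_modules.$hash"
echo "$cache_key"
```

The point is just that the key is a pure function of the lockfile, so any build with an unchanged lockfile can look up the cache without running yarn.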

But my company won’t let me use CircleCI. We have to use TeamCity.

What I’ve tried:

  • Configuring the first TC job to export node_modules as an artifact, but this seems to take forever on TeamCity (>10 minutes for a large node_modules dir), compared to a few seconds on CircleCI. Also, TC doesn’t make it easy to have a dynamic cache key like Circle does.
  • I’ve tried a custom solution: I save a tarball of node_modules to S3 (with cache key based on lockfile), then each Test job streams it down and untars it into node_modules locally, but this ends up taking just as long as running yarn from scratch on each job, so there’s no point.
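For reference, the save/restore logic I have in mind looks like the sketch below. A local directory stands in for the S3 bucket and the yarn install is simulated with a placeholder file, so all paths and names here are hypothetical; only the hit/miss structure is the real idea:

```shell
#!/bin/sh
# Sketch of a lockfile-keyed node_modules cache. A local directory
# stands in for S3, and `yarn install` is simulated; names are hypothetical.
set -eu

project="$(mktemp -d)"; cache="$(mktemp -d)"
printf 'left-pad@^1.3.0:\n  version "1.3.0"\n' > "$project/yarn.lock"

restore_or_build() {
  key="node_modules.$(sha256sum "$project/yarn.lock" | cut -c1-16)"
  tarball="$cache/$key.tar"
  if [ -f "$tarball" ]; then
    tar -xf "$tarball" -C "$project"      # cache hit: skip yarn entirely
    echo "cache hit: $key"
  else
    mkdir -p "$project/node_modules"      # cache miss: run yarn (simulated)
    echo fake-dep > "$project/node_modules/installed.txt"
    tar -cf "$tarball" -C "$project" node_modules
    echo "cache miss: $key"
  fi
}

first="$(restore_or_build)"    # first run: miss, builds and saves the tarball
rm -rf "$project/node_modules"
second="$(restore_or_build)"   # second run: hit, restores from the tarball
echo "$first"; echo "$second"
```

In my real setup the tarball round-trips through S3, and that transfer is where the time goes, so the structure works but the payoff doesn't materialize.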

I’m stuck. Has anyone had any success setting up a CI flow like this on TeamCity?

How to have SQL Server with Git, TeamCity and Octopus auto-deployment work with schema-level restrictions?

We are starting to integrate our SQL Server workflow with Git, by which I mean generating scripts for all objects and storing them in a Git repository so you can pull, edit, commit, push and apply, rather than just modify objects directly.

One problem we have is that certain objects (in this case, stored procedures) are more important than others, and we only want them to be altered by certain users (well, actually, certain AD groups, each of which will have one or two users in them). However, to match our company’s existing IT workflow, we were looking at using TeamCity and Octopus to automatically deploy changes to the database whenever the Git master branch changes (which will occur via an approved pull request). We use Atlassian Bitbucket for hosting the repos.

Can this object-level restriction work with this Git workflow?

Things I’ve looked at:

  1. Bitbucket can allow default reviewers for certain branches, but you can’t add a group, only individuals.
  2. Even if I use some sort of branch-based permission, I’m not sure how we could enforce branch naming conventions when a developer wants to alter one of these “protected” objects.
  3. By using TeamCity and Octopus (in my understanding), no individual developer would have permission to change things in Live; only the Octopus user has that permission. So I’m not sure whether that one user could be allowed to change certain objects only when the right person has approved the appropriate pull request.
  4. We’ve started playing with RedGate’s SQL Source Control, but I think that whether we use this, GitKraken, or any other source control interface tool, the problem won’t change.
  5. Maybe it can’t be done with Octopus, so we just set the object permissions normally and do our pull requests as usual; but rather than Octopus deploying the changes automatically, whoever approves the pull request then applies the object changes manually.

To give an ideal example workflow I’m imagining:

  1. We have a schema Sales and a schema SalesGoldStandard. Any developer can alter objects in the Sales schema, but they want to alter a SalesGoldStandard stored procedure, so they create a branch called SalesGoldStandard-Issue135-Changes
  2. They change the .sql file and commit their change
  3. They push the change to Bitbucket and create a pull request
  4. The pull request automatically adds John (a member of the “Sales Data Custodian” group) as a reviewer, based on the branch name
  5. John is happy, approves the pull request, then TeamCity and Octopus do their thing and deploy the change
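The branch-to-reviewer-group mapping in steps 1 and 4 could be sketched as a simple lookup like the one below. The group names and naming convention are the hypothetical ones from the example above; this isn’t a Bitbucket feature, just the logic whatever hook or integration enforces it would need:

```shell
#!/bin/sh
# Hypothetical sketch of the branch-naming convention: branches touching
# the protected schema must start with "SalesGoldStandard-", and those map
# to the reviewer group that has to approve the pull request.

required_group() {
  case "$1" in
    SalesGoldStandard-*) echo "Sales Data Custodian" ;;
    *)                   echo "any developer" ;;
  esac
}

protected="$(required_group 'SalesGoldStandard-Issue135-Changes')"
ordinary="$(required_group 'feature/fix-typo')"
echo "protected branch reviewer group: $protected"
echo "ordinary branch reviewer group:  $ordinary"
```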

I can’t find any questions here relating to this situation, or anything about Octopus, Bitbucket, or SQL Source Control that seems to help. I get the feeling it can’t be done this way but maybe someone has solved this problem.

Docker builds on TeamCity cloud agent

I’ve never worked with cloud profiles on TeamCity and have a question regarding the Docker build runner…

I’ve got a TC agent pool running as containers on Azure Container Instances. I’d like to use the agents in this pool to run Docker build and push commands with the TC Docker runner, but this obviously requires Docker to be installed on the agent.

How have people got round this? Docker in Docker?
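The options I’m aware of seem to be mounting the host’s Docker socket into the agent container, or pointing DOCKER_HOST at a Docker-in-Docker sidecar. A sketch of how the agent could probe which (if either) is available — the dind address is hypothetical, while the socket path and DOCKER_HOST variable are standard Docker conventions:

```shell
#!/bin/sh
# Sketch: probe how (or whether) this agent can reach a Docker daemon.
# /var/run/docker.sock and DOCKER_HOST are standard Docker conventions;
# "tcp://dind:2375" is a hypothetical sidecar address.

if [ -S /var/run/docker.sock ]; then
  mode="host socket mounted into the agent container"
elif [ -n "${DOCKER_HOST:-}" ]; then
  mode="remote daemon via DOCKER_HOST=$DOCKER_HOST (e.g. tcp://dind:2375)"
else
  mode="no daemon reachable: need a socket mount, a dind sidecar, or DOCKER_HOST"
fi
echo "docker access: $mode"
```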