On this page I outline a recipe for making a package repository server with up-to-date packages from
pkg.openindiana.org, for use on a LAN or on a host which plans to spawn many local zones. A local package repository mirror can save some traffic, and it also lets you do further package installations independently of Internet availability to your host or LAN.
This page is largely based on knowledge published by other authors, including pieces available on the OpenIndiana and illumos wikis; I have tried to reference all sources at the end of this page.
Related topics include Building in zones and Using host-only networking to get from build zones and test VMs to the Internet.
One of the papers (see the Links section below) suggests enabling compression on the package repo dataset, while another says the package files are pre-compressed, so ZFS compression would add only overhead with no benefit. In my tests with several oi_151 repos (listed below), an 8.2GB set of files compressed 1.10x (10%) with
gzip-9, so go figure.
Since this is WORM data, why not enable compression anyway, just in case it helps?
Also, at least one document suggests making a sub-dataset for the OpenIndiana "dev" repository, while your other repos can live in other datasets. Makes sense.
Create the compressed hierarchy of datasets for pkg repos:
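A sketch of such a layout (the pool name, mountpoint and dataset names here are my assumptions; adjust to your system):

```shell
# Assumed pool "rpool" and mountpoint /export/pkg; adjust to taste.
# gzip-9 compression is inherited by the child datasets below.
pfexec zfs create -o compression=gzip-9 -o mountpoint=/export/pkg rpool/export/pkg
# Sub-dataset for the OpenIndiana "dev" repository
pfexec zfs create rpool/export/pkg/dev
# Scratch area for downloaded tarballs
pfexec zfs create rpool/export/pkg/tarballs
```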
The OpenIndiana download site provides assorted tarballs of the package repositories for some of the (intermediate) releases. They can be used to pre-populate a local pkg repo, which can then be updated via rsync. Alternatively,
rsync can be used right away to initiate the repository with all published data (starting from release oi_147); if you prefer this route, scroll down to the chapter "rsync the package repository contents and/or updates".
Archives of some "dev-releases" of the package repository are provided at http://dlc-int.openindiana.org/repos/. They can be downloaded and unpacked to start up a package repository of a particular release.
(A README file or some other map of that site would be nice.)
If you want, you can get all of the repos (note that each oi_151 build repo is 2.6GB in size, and the
legacy repo is almost 60GB). Assuming you have an
/export/pkg/tarballs dataset and it is big enough:
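One way to fetch everything, assuming plain HTTP mirroring with wget (the flags below just skip index pages and parent directories):

```shell
cd /export/pkg/tarballs
# Recursively mirror the repos directory; drop the hostname and leading
# "repos/" path component, and reject the generated index pages
wget -r -np -nH --cut-dirs=1 -R 'index.html*' \
    http://dlc-int.openindiana.org/repos/
```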
Alternatively, browse that resource with a web browser and get just the latest available repository tarball and its checksums, i.e.:
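For example (the filename below is a placeholder; pick the actual latest one from the index page):

```shell
cd /export/pkg/tarballs
# NOTE: "oi-dev-151a-repo.tar.lzma" is a hypothetical name; the checksum
# file extensions may also differ on the real server
wget http://dlc-int.openindiana.org/repos/oi-dev-151a-repo.tar.lzma
wget http://dlc-int.openindiana.org/repos/oi-dev-151a-repo.tar.lzma.md5
wget http://dlc-int.openindiana.org/repos/oi-dev-151a-repo.tar.lzma.sha1
```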
Then, test the received files, like this:
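A sketch of such a check, assuming the checksum files were downloaded alongside the tarball (filenames are placeholders):

```shell
cd /export/pkg/tarballs
# Compare the computed digest against the downloaded checksum file,
# one pair of lines per method per file (filenames are placeholders)
for f in oi-dev-151a-repo.tar.lzma; do
  /usr/bin/digest -v -a md5  "$f"; cat "$f.md5"
  /usr/bin/digest -v -a sha1 "$f"; cat "$f.sha1"
done
```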
This should output sets of two lines per checksum method per file with identical checksums, like these:
Inside, these archives have similar structures, topped by variably-named top-level directories like:
Each such directory contains a
publishers/openindiana.org directory with package files, and a
pkg5.repository metadata snippet.
It takes some effort to merge the directory contents into one repository, by moving files from older sources first, then overwriting (as necessary) with newer ones; I did this lazily, with Midnight Commander (
mc). As a result, the
publishers/ sub-directory and the
pkg5.repository metadata reside directly in my repository dataset's root.
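A scripted equivalent of that merge might look like this (the directory names are hypothetical; the order matters, oldest first, so that newer files win):

```shell
cd /export/pkg/tarballs
# Unpacked top-level directories, oldest to newest (names are placeholders);
# later trees overwrite earlier files in the target dataset
for d in oi-dev-147-repo oi-dev-148-repo oi-dev-151a-repo; do
  rsync -a "$d/" /export/pkg/dev/
done
```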
The whole repository can be updated (or even fetched right away, skipping the downloadable tarballs) from the origin servers, but note that your repository would then include revisions for many releases of OpenIndiana (147 through 151a4 as of this writing) and take up over 8GB of disk space.
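A sketch of such an rsync update; the module name below is an assumption, so list what the server actually offers first:

```shell
# Discover the modules the server exports (module name below is a guess!)
rsync rsync://pkg.openindiana.org/
# Pull updates into the local dataset; add --delete only if you want a
# strict mirror (it would remove locally-merged older files)
rsync -avP rsync://pkg.openindiana.org/dev/ /export/pkg/dev/
```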
Now that you have repository data, you can create the SMF instance for the package depot server, which will actually serve IPS packages to IPS clients (the
pkg(5) program and its relatives).
Following another manual (with additions which I found were needed):
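The gist, as I understand it (the instance name devrepository, the path /export/pkg/dev, port 10002 and the hostname are this page's assumptions; see the pkg.depotd(1M) manpage for the properties):

```shell
# Create a depot instance and point it at the repository dataset
pfexec svccfg -s pkg/server add devrepository
pfexec svccfg -s pkg/server:devrepository addpg pkg application
pfexec svccfg -s pkg/server:devrepository setprop pkg/inst_root = astring: /export/pkg/dev
pfexec svccfg -s pkg/server:devrepository setprop pkg/port = count: 10002
pfexec svccfg -s pkg/server:devrepository setprop pkg/readonly = boolean: true
# Track the depot via its process contract instead of a daemonizing parent
pfexec svccfg -s pkg/server:devrepository addpg startd framework
pfexec svccfg -s pkg/server:devrepository setprop startd/duration = astring: contract
# Avoid browser redirects to http://0.0.0.0:10002/ (hostname is assumed)
pfexec svccfg -s pkg/server:devrepository setprop pkg/proxy_base = astring: http://pkgserver.local:10002
pfexec svcadm refresh pkg/server:devrepository
pfexec svcadm enable pkg/server:devrepository
```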
By default the depot daemonizes into a child process. That failed on my box (SMF thought the service died after daemonization and spawned new daemons, doomed to fail due to the busy TCP port; this can also lead to memory leaks in
svc.startd, tracked in bug #2801); the
contract mode takes care of this, seemingly properly.
Without setting
proxy_base, browser accesses to the port were redirected to "
http://0.0.0.0:10002/..." which did not work.
Using a mirror apparently directs the metadata traffic (i.e. checks for current package versions) to the "origin" server if available, while the mirror itself serves whatever content data it has.
Note that this is different from setting a preferred publisher with, possibly, different sets or builds of packages.
My depot instance is named
devrepository; your system might use another name, hence the check. Also note the comment above regarding the paths: until I figure out how to serve relative URLs, the repo is hosted at the HTTP namespace's root.
To remove a mirror, the
-M flag can be used instead of
-m; the other parameters remain the same on the command line.
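For example, on a client (the hostname is an assumption):

```shell
# Add the local depot as a mirror for the existing publisher
pfexec pkg set-publisher -m http://pkgserver.local:10002/ openindiana.org
# Later, the same command with -M instead of -m removes that mirror
pfexec pkg set-publisher -M http://pkgserver.local:10002/ openindiana.org
```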
If you want to use a particular package source (publisher and URI) for installing a local zone, that can be arranged too:
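One way to arrange this (a sketch; the zone name and hostname are assumptions): since an ipkg-brand zone picks up the global zone's publisher configuration during install, point the publisher's origin at the local depot first:

```shell
# In the global zone: point the publisher's origin at the local depot
pfexec pkg set-publisher -O http://pkgserver.local:10002/ openindiana.org
# Install the zone; it inherits the publisher configured above
pfexec zoneadm -z zone1 install
```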
Likewise, specific packages (beyond the predefined minimum) can be requested with the
-e option. For more details see:
Also note that the first run may take a while; I guess some things get cached (my first local installation took 233 seconds, subsequent ones took ~135s; an install from the Internet took ~140s). In my case the savings were in traffic (installing many zones on many hosts) and in Internet-independence during installation, rather than in wallclock time; the system was likely IOPS-bound on its local disk. YMMV.
Also note that you CAN use "
localhost" as the package server name in the call above, and the local zone will be installed by the global zone accessing its
localhost package depot listener. However, package installations and updates with
pkg(5) calls issued from inside the local zone would likely fail, unless you forward accesses to the local zone's
localhost:10002 up to a real package depot server, or install one in the zone. Indeed, this may be what you want, in order to create local zones incapable of auto-updates from inside...
In real life you'd likely clone the first installed "golden" local zone with the
zoneadm tools, rather than install many identical zones on the same machine via the complete original procedure. Why waste time and space?
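A sketch of that route (zone names are hypothetical; the clone target needs its own configuration before cloning):

```shell
# Export the golden zone's config and adapt it for the clone
zonecfg -z zone1 export -f /tmp/zone1.cfg
# Crude rename; also adjust zonepath, NICs, etc. as needed
sed -e 's/zone1/zone2/g' /tmp/zone1.cfg > /tmp/zone2.cfg
pfexec zonecfg -z zone2 -f /tmp/zone2.cfg
# Clone; on ZFS zonepaths this uses a snapshot, so it is fast and cheap
pfexec zoneadm -z zone2 clone zone1
```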
Most of the work was done and described by others, in assorted posts. Links include:
ipkg(5) brand and related manpages: