Compare commits

...

41 Commits

Author SHA1 Message Date
niusmallnan
4fb7f8e3b7 Roll back kernel and os-base because of the zfs issue
kernel: 4.9.80
os-base: v2017.02.10-1
2018-03-27 10:03:35 +08:00
niusmallnan
a09f6ee1e2 Bump rpi kernel to 4.9.80
rpi-bootloader: v20180320-071222
rpi64-kernel: v20180319-130037
(cherry picked from commit 0ac085b273)
2018-03-27 10:01:19 +08:00
niusmallnan
fe5d2dd32b Add kernel.panic parameter for cmdline
(cherry picked from commit 2df6bdcd66)
2018-03-26 10:23:15 +08:00
niusmallnan
b1ed273b64 Support container-crontab for arm64 2018-03-15 17:34:57 +08:00
niusmallnan
ee998fc259 Bump kernel to 4.15.9 2018-03-15 09:23:58 +08:00
niusmallnan
fc17e89393 Also check the label RANCHEROS (#2285) 2018-03-14 18:35:30 +08:00
niusmallnan
4876087067 Merge pull request #2277 from vancluever/b-another-shutdown-fix
cmd/power: Another shutdown command fix
2018-03-12 10:30:58 +08:00
niusmallnan
19a8103eb7 Update dapper build for golang 2018-03-08 14:14:48 +08:00
niusmallnan
c320736b7a Bump os-base to 2018.02-1 2018-03-08 13:56:12 +08:00
niusmallnan
a16c56f7be Get rid of the system-docker-proxy 2018-03-08 10:23:12 +08:00
Chris Marchesi
7d86fa5f8b cmd/power: Another shutdown command fix
It looks like some arguments for shutdown/halt/poweroff have been moved
to a conditional block that works off of how the command was actually
called. However, this value is derived from argv 0, without any sort of
normalization to make sure it matches the relative commands used to
determine how arguments are handled.

This has particular implications when power management commands are
called via absolute commands, as for example in the case of
open-vm-tools which calls /sbin/shutdown -h now specifically when
shutting down a system.

This corrects the situation by passing argv 0 through filepath.Base
before operating on it.
2018-03-06 14:16:44 -08:00
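The fix described in this commit amounts to normalizing argv[0] with filepath.Base before comparing it against command names. A minimal standalone Go sketch of the idea (not the actual RancherOS code):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// commandName returns the base name of how the binary was invoked, so that
// an absolute invocation like "/sbin/shutdown" and a bare "shutdown" are
// treated identically by any logic that switches on the command name.
func commandName(argv0 string) string {
	return filepath.Base(argv0)
}

func main() {
	fmt.Println(commandName("/sbin/shutdown")) // shutdown
	fmt.Println(commandName("halt"))           // halt
}
```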
niusmallnan
f6a76a10ae Config ROS image prefix for install and all rancher/os services (#2272) 2018-03-06 18:07:21 +08:00
William Fleurant
d263be4bae globalcfg: reboot 10s after kernel panic fixes #1785 (#1786) 2018-03-05 16:55:01 +08:00
niusmallnan
9c9c3ce141 Fix go test for ssh port and listen_address config 2018-03-05 16:39:30 +08:00
niusmallnan
67961c9349 Support to configure ssh port and listen address 2018-03-05 16:39:30 +08:00
niusmallnan
204011e401 Bump system-docker to 17.06-ros3 2018-03-05 13:32:00 +08:00
niusmallnan
9ced2ba666 Add rancher.resize_device cmdline for DO 2018-03-01 09:35:45 +08:00
niusmallnan
fb2acdb1f0 Update ignore 2018-02-28 15:43:30 +08:00
niusmallnan
34b7ab73c7 Remove import system-docker 2018-02-28 15:43:08 +08:00
niusmallnan
c5f1b28af8 Add SYSTEM_DOCKER_URL env 2018-02-27 23:44:22 +08:00
niusmallnan
43f483a5ef Support higher version docker-ce as system-docker 2018-02-27 23:44:22 +08:00
niusmallnan
a7ba5d045b Remove system-docker symlink 2018-02-27 23:44:22 +08:00
niusmallnan
48e9314d0c Update os-config for new system-docker 2018-02-27 23:44:22 +08:00
Bill Maxwell
231ece3a9e update container crontab version (#2259)
* update container crontab version

* Format the yaml
2018-02-22 11:36:30 +08:00
niusmallnan
b5ef0f1c4e Bump os-base to 2017.02.10-1 2018-02-13 16:14:13 +08:00
niusmallnan
947049cc3c Use kernel 4.15.2 2018-02-09 20:38:09 +08:00
niusmallnan
4cb3e0fcb7 Update README for v1.2.0 release 2018-02-07 11:35:15 +08:00
niusmallnan
8cda43a68a Reduce the memory consumption at startup (#2247)
The offline images are automatically loaded when the system boots.
When system memory is not large enough (such as 1G), this can lead to
a kernel panic.
2018-02-05 17:43:39 +08:00
niusmallnan
22cac7abed Adjust the parameter upgrade-console order (#2246) 2018-02-05 16:22:16 +08:00
niusmallnan
a29eee070b Bump to kernel 4.9.78-rancher2
Fix verbose output for ramdisk info
2018-01-29 13:48:31 +08:00
niusmallnan
d9d48a1905 Fix a typo path for rpi kernel url 2018-01-26 10:32:53 +08:00
niusmallnan
a268907302 GetRetry is used when detecting if url is available (#2237) 2018-01-25 18:01:55 +08:00
niusmallnan
1c2e55ed17 Fixes the following scenario in which the system can not reboot (#2236)
1. use ros install
2. use ros os upgrade
2018-01-25 16:25:09 +08:00
Julien Kassar
82aaa413f5 Fix format 'verbs' (#2115) 2018-01-25 09:47:17 +08:00
Julien Kassar
a08ad16a01 Replace Sirupsen/logrus package with rancher/os/log (#2114) 2018-01-24 17:57:02 +08:00
niusmallnan
6bd6f0c43c Fix shutdown -h command (#2234) 2018-01-24 17:53:52 +08:00
niusmallnan
b512a9336a Fix go fmt 2018-01-23 18:09:59 +08:00
niusmallnan
d520ef1a1b Merge pull request #2138 from vancluever/shutdown-reboot-arg-fix
cmd/power: Set correct container name and ensure full command executed
2018-01-23 18:07:57 +08:00
niusmallnan
992142b8ea Merge branch 'master' into shutdown-reboot-arg-fix 2018-01-23 18:01:29 +08:00
niusmallnan
41543d533f Bump to rpi kernel 4.9.76 2018-01-21 20:56:39 +08:00
Chris Marchesi
2f8eaa3314 cmd/power: Set correct container name and ensure full command executed
This fixes a few issues that are preventing shutdown and friends from
behaving correctly:

* The command name, which is being used to determine via what command it
was being called (ie: shutdown, reboot, or halt) was not being parsed
for absolute paths. This was preventing certain logic from being handled
(example: enforcing a static time value of "now" for shutdown), but more
problematic was the fact that it was being used as the container
name being passed to runDocker, the function that launches the
independent shutdown container. This was causing the shutdown container
to fail as something like "/sbin/shutdown" is not a valid name for a
container. The logic to parse out the base command being run is actually
present in runDocker, but does not run if a name is supplied to the
function.

* Further, the command line was not being parsed in the shutdown
container if the name supplied to runDocker was non-empty. Rather, the
full command to run just became the name of the container. Hence,
something like "/sbin/shutdown -h now" became just "shutdown", executing
the default power off behaviour for all actions (including reboots).

* Further to this, open-vm-tools expects "/sbin/shutdown -h now" to be a
valid command to halt the system, which was not being recognized, as the
only recognized short-form halt flag in shutdown was its capital version
(-H).

This fixes these three issues by parsing out the base of the called
command before sending it to reboot, using all of os.Args as the command
line to run regardless of whether a name was set for the container or not,
and finally adding the lowercase -h switch to the "shutdown" form of
this command ("halt" is still uppercase only).

Fixes rancher/os#2121.
Fixes rancher/os#2074.
2017-10-20 17:09:13 -07:00
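The container-name problem in the first bullet comes from Docker rejecting slashes in container names. The sanitization the diff below applies in runDocker can be sketched as a standalone snippet (mirroring the expression from the diff; not the full RancherOS function):

```go
package main

import (
	"fmt"
	"strings"
)

// containerName turns an invocation path into a valid Docker container name:
// slashes become dashes, and the leading dash produced by an absolute path
// is trimmed off.
func containerName(name string) string {
	return strings.TrimPrefix(strings.Join(strings.Split(name, "/"), "-"), "-")
}

func main() {
	fmt.Println(containerName("/sbin/shutdown")) // sbin-shutdown
	fmt.Println(containerName("reboot"))         // reboot
}
```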
35 changed files with 299 additions and 235 deletions

.gitignore

@@ -17,3 +17,4 @@ __pycache__
/.dapper
/.trash-cache
.idea
.trash-conf


@@ -35,7 +35,8 @@ RUN echo "Acquire::http { Proxy \"$APTPROXY\"; };" >> /etc/apt/apt.conf.d/01prox
vim \
wget \
xorriso \
telnet
xz-utils \
telnet
########## Dapper Configuration #####################
@@ -63,7 +64,7 @@ ARG DOCKER_BUILD_VERSION=1.10.3
ARG DOCKER_BUILD_PATCH_VERSION=v${DOCKER_BUILD_VERSION}-ros1
ARG SELINUX_POLICY_URL=https://github.com/rancher/refpolicy/releases/download/v0.0.3/policy.29
ARG KERNEL_VERSION_amd64=4.9.75-rancher
ARG KERNEL_VERSION_amd64=4.9.80-rancher
ARG KERNEL_URL_amd64=https://github.com/rancher/os-kernel/releases/download/v${KERNEL_VERSION_amd64}/linux-${KERNEL_VERSION_amd64}-x86.tar.gz
ARG DOCKER_URL_amd64=https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz
@@ -78,8 +79,12 @@ ARG OS_SERVICES_REPO=https://raw.githubusercontent.com/${OS_REPO}/os-services
ARG IMAGE_NAME=${OS_REPO}/os
ARG DFS_IMAGE=${OS_REPO}/docker:v${DOCKER_VERSION}-2
ARG OS_BASE_URL_amd64=https://github.com/rancher/os-base/releases/download/v2017.02.9-1/os-base_amd64.tar.xz
ARG OS_BASE_URL_arm64=https://github.com/rancher/os-base/releases/download/v2017.02.9-1/os-base_arm64.tar.xz
ARG OS_BASE_URL_amd64=https://github.com/rancher/os-base/releases/download/v2017.02.10-1/os-base_amd64.tar.xz
ARG OS_BASE_URL_arm64=https://github.com/rancher/os-base/releases/download/v2017.02.10-1/os-base_arm64.tar.xz
ARG SYSTEM_DOCKER_VERSION=17.06-ros3
ARG SYSTEM_DOCKER_URL_amd64=https://github.com/niusmallnan/os-system-docker/releases/download/${SYSTEM_DOCKER_VERSION}/docker-amd64-${SYSTEM_DOCKER_VERSION}.tgz
ARG SYSTEM_DOCKER_URL_arm64=https://github.com/niusmallnan/os-system-docker/releases/download/${SYSTEM_DOCKER_VERSION}/docker-arm64-${SYSTEM_DOCKER_VERSION}.tgz
######################################################
# Set up environment and export all ARGS as ENV
@@ -115,7 +120,10 @@ ENV BUILD_DOCKER_URL=BUILD_DOCKER_URL_${ARCH} \
OS_REPO=${OS_REPO} \
OS_SERVICES_REPO=${OS_SERVICES_REPO} \
REPO_VERSION=master \
SELINUX_POLICY_URL=${SELINUX_POLICY_URL}
SELINUX_POLICY_URL=${SELINUX_POLICY_URL} \
SYSTEM_DOCKER_URL=SYSTEM_DOCKER_URL_${ARCH} \
SYSTEM_DOCKER_URL_amd64=${SYSTEM_DOCKER_URL_amd64} \
SYSTEM_DOCKER_URL_arm64=${SYSTEM_DOCKER_URL_arm64}
ENV PATH=${GOPATH}/bin:/usr/local/go/bin:$PATH
RUN mkdir -p ${DOWNLOADS}
@@ -131,24 +139,13 @@ RUN echo "... Downloading ${!KERNEL_URL}"; \
RUN curl -pfL ${SELINUX_POLICY_URL} > ${DOWNLOADS}/$(basename ${SELINUX_POLICY_URL})
# Install Go
RUN ln -sf go-6 /usr/bin/go && \
curl -sfL https://storage.googleapis.com/golang/go${GO_VERSION}.src.tar.gz | tar -xzf - -C /usr/local && \
cd /usr/local/go/src && \
GOROOT_BOOTSTRAP=/usr GOARCH=${HOST_ARCH} GOHOSTARCH=${HOST_ARCH} ./make.bash && \
rm /usr/bin/go
RUN wget -O - https://storage.googleapis.com/golang/go${GO_VERSION}.linux-${GOARCH}.tar.gz | tar -xzf - -C /usr/local && \
go get github.com/rancher/trash && go get github.com/golang/lint/golint
# Install Host Docker
RUN curl -fL ${!BUILD_DOCKER_URL} > /usr/bin/docker && \
chmod +x /usr/bin/docker
# Install Trash
RUN go get github.com/rancher/trash
# Install golint
RUN go get github.com/golang/lint/golint
RUN go get gopkg.in/check.v1
# Install dapper
RUN curl -sL https://releases.rancher.com/dapper/latest/dapper-`uname -s`-`uname -m | sed 's/arm.*/arm/'` > /usr/bin/dapper && \
chmod +x /usr/bin/dapper


@@ -51,10 +51,10 @@ itest:
qcows:
cp dist/artifacts/rancheros.iso scripts/images/openstack/
cd scripts/images/openstack && \
APPEND="console=tty1 console=ttyS0,115200n8 printk.devkmsg=on rancher.autologin=ttyS0" \
APPEND="console=tty1 console=ttyS0,115200n8 printk.devkmsg=on rancher.autologin=ttyS0 panic=10" \
NAME=openstack ../../../.dapper
cd scripts/images/openstack && \
APPEND="console=tty1 printk.devkmsg=on notsc clocksource=kvm-clock rancher.network.interfaces.eth0.ipv4ll rancher.cloud_init.datasources=[digitalocean] rancher.autologin=tty1 rancher.autologin=ttyS0" \
APPEND="console=tty1 printk.devkmsg=on notsc clocksource=kvm-clock rancher.network.interfaces.eth0.ipv4ll rancher.cloud_init.datasources=[digitalocean] rancher.autologin=tty1 rancher.autologin=ttyS0 panic=10 rancher.resize_device=/dev/vda" \
NAME=digitalocean ../../../.dapper
cp ./scripts/images/openstack/dist/*.img dist/artifacts/


@@ -14,12 +14,12 @@ it would really be bad if somebody did `docker rm -f $(docker ps -qa)` and delet
## Stable Release
**v1.1.3 - Docker 17.06.2-ce - Linux 4.9.75**
**v1.2.0 - Docker 17.09.1-ce - Linux 4.9.78**
### ISO
- https://releases.rancher.com/os/latest/rancheros.iso
- https://releases.rancher.com/os/v1.1.3/rancheros.iso
- https://releases.rancher.com/os/v1.2.0/rancheros.iso
### Additional Downloads
@@ -35,17 +35,24 @@ it would really be bad if somebody did `docker rm -f $(docker ps -qa)` and delet
* https://releases.rancher.com/os/latest/rootfs.tar.gz
* https://releases.rancher.com/os/latest/vmlinuz
#### v1.1.3 Links
#### v1.2.0 Links
* https://releases.rancher.com/os/v1.1.3/initrd
* https://releases.rancher.com/os/v1.1.3/iso-checksums.txt
* https://releases.rancher.com/os/v1.1.3/rancheros-openstack.img
* https://releases.rancher.com/os/v1.1.3/rancheros-digitalocean.img
* https://releases.rancher.com/os/v1.1.3/rancheros-aliyun.vhd
* https://releases.rancher.com/os/v1.1.3/rancheros.ipxe
* https://releases.rancher.com/os/v1.1.3/rancheros-gce.tar.gz
* https://releases.rancher.com/os/v1.1.3/rootfs.tar.gz
* https://releases.rancher.com/os/v1.1.3/vmlinuz
* https://releases.rancher.com/os/v1.2.0/initrd
* https://releases.rancher.com/os/v1.2.0/iso-checksums.txt
* https://releases.rancher.com/os/v1.2.0/rancheros-openstack.img
* https://releases.rancher.com/os/v1.2.0/rancheros-digitalocean.img
* https://releases.rancher.com/os/v1.2.0/rancheros-aliyun.vhd
* https://releases.rancher.com/os/v1.2.0/rancheros.ipxe
* https://releases.rancher.com/os/v1.2.0/rancheros-gce.tar.gz
* https://releases.rancher.com/os/v1.2.0/rootfs.tar.gz
* https://releases.rancher.com/os/v1.2.0/vmlinuz
#### ARM Links
* https://releases.rancher.com/os/latest/rootfs_arm64.tar.gz
* https://releases.rancher.com/os/latest/rancheros-raspberry-pi64.zip
* https://releases.rancher.com/os/v1.2.0/rootfs_arm64.tar.gz
* https://releases.rancher.com/os/v1.2.0/rancheros-raspberry-pi64.zip
**Note**: you can use `http` instead of `https` in the above URLs, e.g. for iPXE.
@@ -57,21 +64,21 @@ SSH keys are added to the **`rancher`** user, so you must log in using the **ran
Region | Type | AMI |
-------|------|------
ap-south-1 | HVM | [ami-74a0f41b](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-74a0f41b)
eu-west-3 | HVM | [ami-7503b408](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-7503b408)
eu-west-2 | HVM | [ami-0d938b69](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-0d938b69)
eu-west-1 | HVM | [ami-be0293c7](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-be0293c7)
ap-northeast-2 | HVM | [ami-e8af0f86](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-e8af0f86)
ap-northeast-1 | HVM | [ami-a873edce](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-a873edce)
sa-east-1 | HVM | [ami-c11153ad](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-c11153ad)
ca-central-1 | HVM | [ami-d3f471b7](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-d3f471b7)
ap-southeast-1 | HVM | [ami-647e0e18](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-647e0e18)
ap-southeast-2 | HVM | [ami-3643b154](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-3643b154)
eu-central-1 | HVM | [ami-4c22b123](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-4c22b123)
us-east-1 | HVM | [ami-72613d08](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-72613d08)
us-east-2 | HVM | [ami-67d1fa02](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-67d1fa02)
us-west-1 | HVM | [ami-16cdcd76](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-16cdcd76)
us-west-2 | HVM | [ami-8916a3f1](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-8916a3f1)
ap-south-1 | HVM | [ami-12db887d](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-12db887d)
eu-west-3 | HVM | [ami-d5a315a8](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-d5a315a8)
eu-west-2 | HVM | [ami-80bd58e7](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-80bd58e7)
eu-west-1 | HVM | [ami-69187010](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-69187010)
ap-northeast-2 | HVM | [ami-57dd7f39](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-57dd7f39)
ap-northeast-1 | HVM | [ami-a3c2b5c5](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-a3c2b5c5)
sa-east-1 | HVM | [ami-6c2f6100](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-6c2f6100)
ca-central-1 | HVM | [ami-b8a622dc](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-b8a622dc)
ap-southeast-1 | HVM | [ami-0f5a1b73](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-0f5a1b73)
ap-southeast-2 | HVM | [ami-edc73c8f](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-edc73c8f)
eu-central-1 | HVM | [ami-28422647](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-28422647)
us-east-1 | HVM | [ami-a7151cdd](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-a7151cdd)
us-east-2 | HVM | [ami-a383b6c6](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-a383b6c6)
us-west-1 | HVM | [ami-c4b3bca4](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-c4b3bca4)
us-west-2 | HVM | [ami-6e1a9e16](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-6e1a9e16)
Additionally, images are available with support for Amazon EC2 Container Service (ECS) [here](https://docs.rancher.com/os/amazon-ecs/#amazon-ecs-enabled-amis).
@@ -79,7 +86,7 @@ Additionally, images are available with support for Amazon EC2 Container Service
We are providing a disk image that users can download and import for use in Google Compute Engine. The image can be obtained from the release artifacts for RancherOS.
[Download Image](https://releases.rancher.com/os/v1.1.3/rancheros-gce.tar.gz)
[Download Image](https://releases.rancher.com/os/v1.2.0/rancheros-gce.tar.gz)
Please follow the directions at our [docs to launch in GCE](http://docs.rancher.com/os/running-rancheros/cloud/gce/).


@@ -86,7 +86,7 @@ func consoleInitFunc() error {
log.Error(err)
}
if err := modifySshdConfig(); err != nil {
if err := modifySshdConfig(cfg); err != nil {
log.Error(err)
}
@@ -242,19 +242,28 @@ func writeRespawn(user string, sshd, recovery bool) error {
return ioutil.WriteFile("/etc/respawn.conf", []byte(respawn), 0644)
}
func modifySshdConfig() error {
func modifySshdConfig(cfg *config.CloudConfig) error {
sshdConfig, err := ioutil.ReadFile("/etc/ssh/sshd_config")
if err != nil {
return err
}
sshdConfigString := string(sshdConfig)
for _, item := range []string{
modifiedLines := []string{
"UseDNS no",
"PermitRootLogin no",
"ServerKeyBits 2048",
"AllowGroups docker",
} {
}
if cfg.Rancher.SSH.Port > 0 && cfg.Rancher.SSH.Port < 65535 {
modifiedLines = append(modifiedLines, fmt.Sprintf("Port %d", cfg.Rancher.SSH.Port))
}
if cfg.Rancher.SSH.ListenAddress != "" {
modifiedLines = append(modifiedLines, fmt.Sprintf("ListenAddress %s", cfg.Rancher.SSH.ListenAddress))
}
for _, item := range modifiedLines {
match, err := regexp.Match("^"+item, sshdConfig)
if err != nil {
return err
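The modifySshdConfig change above builds a list of sshd_config lines to enforce, appending Port and ListenAddress only when the user configured them. A self-contained sketch of that list-building step (field values passed as plain parameters instead of the config struct, and using the conventional 65535 port upper bound):

```go
package main

import "fmt"

// sshdLines returns the sshd_config lines to enforce, adding Port and
// ListenAddress entries only when they were explicitly configured.
func sshdLines(port int, listenAddress string) []string {
	lines := []string{
		"UseDNS no",
		"PermitRootLogin no",
		"ServerKeyBits 2048",
		"AllowGroups docker",
	}
	if port > 0 && port < 65535 {
		lines = append(lines, fmt.Sprintf("Port %d", port))
	}
	if listenAddress != "" {
		lines = append(lines, fmt.Sprintf("ListenAddress %s", listenAddress))
	}
	return lines
}

func main() {
	fmt.Println(sshdLines(2222, "10.0.0.5"))
}
```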


@@ -85,7 +85,6 @@ func setupCommandSymlinks() {
{config.RosBin, "/usr/bin/cloud-init-save"},
{config.RosBin, "/usr/bin/dockerlaunch"},
{config.RosBin, "/usr/bin/respawn"},
{config.RosBin, "/usr/bin/system-docker"},
{config.RosBin, "/usr/sbin/netconf"},
{config.RosBin, "/usr/sbin/wait-for-docker"},
{config.RosBin, "/usr/sbin/poweroff"},


@@ -121,7 +121,11 @@ func installAction(c *cli.Context) error {
image := c.String("image")
cfg := config.LoadConfig()
if image == "" {
image = cfg.Rancher.Upgrade.Image + ":" + config.Version + config.Suffix
image = fmt.Sprintf("%s:%s%s",
cfg.Rancher.Upgrade.Image,
config.Version,
config.Suffix)
image = formatImage(image, cfg)
}
installType := c.String("install-type")
@@ -202,7 +206,7 @@ func runInstall(image, installType, cloudConfig, device, partition, statedir, ka
}
// Versions before 0.8.0-rc3 use the old calling convention (from the lay-down-os shell script)
imageVersion := strings.TrimPrefix(image, "rancher/os:")
imageVersion := strings.Split(image, ":")[1]
if version.GreaterThan("v0.8.0-rc3", imageVersion) {
log.Infof("user specified to install pre v0.8.0: %s", image)
imageVersion = strings.Replace(imageVersion, "-", ".", -1)
@@ -230,11 +234,11 @@ func runInstall(image, installType, cloudConfig, device, partition, statedir, ka
}
}
if _, err := os.Stat("/usr/bin/system-docker"); os.IsNotExist(err) {
if err := os.Symlink("/usr/bin/ros", "/usr/bin/system-docker"); err != nil {
log.Errorf("ln error %s", err)
}
}
//if _, err := os.Stat("/usr/bin/system-docker"); os.IsNotExist(err) {
//if err := os.Symlink("/usr/bin/ros", "/usr/bin/system-docker"); err != nil {
//log.Errorf("ln error %s", err)
//}
//}
useIso := false
// --isoinstallerloaded is used if ros has created the installer container from an image that was on the booted iso
@@ -256,7 +260,7 @@ func runInstall(image, installType, cloudConfig, device, partition, statedir, ka
cmd := exec.Command("system-docker", "load", "-i", "/bootiso/rancheros/installer.tar.gz")
cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
if err := cmd.Run(); err != nil {
log.Infof("failed to load images from /bootiso/rancheros: %s", err)
log.Infof("failed to load images from /bootiso/rancheros: %v", err)
} else {
log.Infof("Loaded images from /bootiso/rancheros/installer.tar.gz")
@@ -376,20 +380,32 @@ func runInstall(image, installType, cloudConfig, device, partition, statedir, ka
return nil
}
func getDeviceByLabel(label string) (string, string) {
d, t, err := util.Blkid(label)
if err != nil {
log.Warnf("Failed to run blkid for %s", label)
return "", ""
}
return d, t
}
func getBootIso() (string, string, error) {
deviceName := "/dev/sr0"
deviceType := "iso9660"
d, t, err := util.Blkid("RancherOS")
if err != nil {
return "", "", errors.Wrap(err, "Failed to run blkid")
}
if d != "" {
deviceName = d
deviceType = t
// Our ISO LABEL is RancherOS
// But some tools (like rufus) will change the LABEL to RANCHEROS
for _, label := range []string{"RancherOS", "RANCHEROS"} {
d, t := getDeviceByLabel(label)
if d != "" {
deviceName = d
deviceType = t
continue
}
}
// Check whether the sr device exists
if _, err = os.Stat(deviceName); os.IsNotExist(err) {
if _, err := os.Stat(deviceName); os.IsNotExist(err) {
return "", "", err
}
@@ -435,7 +451,7 @@ func layDownOS(image, installType, cloudConfig, device, partition, statedir, kap
//cloudConfig := SCRIPTS_DIR + "/conf/empty.yml" //${cloudConfig:-"${SCRIPTS_DIR}/conf/empty.yml"}
CONSOLE := "tty0"
baseName := "/mnt/new_img"
kernelArgs := "printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait" // console="+CONSOLE
kernelArgs := "printk.devkmsg=on rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait panic=10" // console="+CONSOLE
if statedir != "" {
kernelArgs = kernelArgs + " rancher.state.directory=" + statedir
}


@@ -117,7 +117,18 @@ func getImages() (*Images, error) {
}
}
return parseBody(body)
images, err := parseBody(body)
if err != nil {
return nil, err
}
cfg := config.LoadConfig()
images.Current = formatImage(images.Current, cfg)
for i := len(images.Available) - 1; i >= 0; i-- {
images.Available[i] = formatImage(images.Available[i], cfg)
}
return images, nil
}
func osMetaDataGet(c *cli.Context) error {
@@ -133,6 +144,7 @@ func osMetaDataGet(c *cli.Context) error {
cfg := config.LoadConfig()
runningName := cfg.Rancher.Upgrade.Image + ":" + config.Version
runningName = formatImage(runningName, cfg)
foundRunning := false
for i := len(images.Available) - 1; i >= 0; i-- {
@@ -210,7 +222,7 @@ func osVersion(c *cli.Context) error {
return nil
}
func startUpgradeContainer(image string, stage, force, reboot, kexec, debug bool, upgradeConsole bool, kernelArgs string) error {
func startUpgradeContainer(image string, stage, force, reboot, kexec, upgradeConsole, debug bool, kernelArgs string) error {
command := []string{
"-t", "rancher-upgrade",
"-r", config.Version,


@@ -5,8 +5,8 @@ import (
"os/exec"
"syscall"
log "github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/rancher/os/log"
)
func recoveryInitAction(c *cli.Context) error {


@@ -175,14 +175,14 @@ func startDocker(cfg *config.CloudConfig) error {
return err
}
cmd := []string{"docker-runc", "exec", "--", info.ID, "env"}
cmd := []string{"system-docker-runc", "exec", "--", info.ID, "env"}
log.Info(dockerCfg.AppendEnv())
cmd = append(cmd, dockerCfg.AppendEnv()...)
cmd = append(cmd, dockerCommand...)
cmd = append(cmd, args...)
log.Infof("Running %v", cmd)
return syscall.Exec("/usr/bin/ros", cmd, os.Environ())
return syscall.Exec("/usr/bin/system-docker-runc", cmd, os.Environ())
}
func waitForPid(service string, project *project.Project) (int, error) {


@@ -6,6 +6,7 @@ import (
"os"
"strings"
"github.com/rancher/os/config"
"github.com/rancher/os/log"
)
@@ -19,3 +20,11 @@ func yes(question string) bool {
return strings.ToLower(line[0:1]) == "y"
}
func formatImage(image string, cfg *config.CloudConfig) string {
domainRegistry := cfg.Rancher.Environment["REGISTRY_DOMAIN"]
if domainRegistry != "docker.io" && domainRegistry != "" {
return fmt.Sprintf("%s/%s", domainRegistry, image)
}
return image
}
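The formatImage helper added above simply prefixes a non-default registry domain onto the image name. A self-contained sketch of the same logic, with the config lookup replaced by a plain string parameter (registry.example.com is a hypothetical registry for illustration):

```go
package main

import "fmt"

// formatImage prefixes the image with a private registry domain, unless the
// domain is empty or the default docker.io.
func formatImage(image, domainRegistry string) string {
	if domainRegistry != "docker.io" && domainRegistry != "" {
		return fmt.Sprintf("%s/%s", domainRegistry, image)
	}
	return image
}

func main() {
	fmt.Println(formatImage("rancher/os:v1.2.0", "registry.example.com")) // registry.example.com/rancher/os:v1.2.0
	fmt.Println(formatImage("rancher/os:v1.2.0", "docker.io"))           // rancher/os:v1.2.0
}
```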


@@ -70,7 +70,7 @@ func ApplyNetworkConfig(cfg *config.CloudConfig) {
}
resolve, err := ioutil.ReadFile("/etc/resolv.conf")
log.Debugf("Resolve.conf == [%s], %s", resolve, err)
log.Debugf("Resolve.conf == [%s], %v", resolve, err)
log.Infof("Apply Network Config SyncHostname")
if err := hostname.SyncHostname(); err != nil {


@@ -38,14 +38,15 @@ func runDocker(name string) error {
return err
}
cmd := []string{name}
containerName := strings.TrimPrefix(strings.Join(strings.Split(name, "/"), "-"), "-")
cmd := os.Args
log.Debugf("runDocker cmd: %s", cmd)
if name == "" {
name = filepath.Base(os.Args[0])
cmd = os.Args
}
containerName := strings.TrimPrefix(strings.Join(strings.Split(name, "/"), "-"), "-")
existing, err := client.ContainerInspect(context.Background(), containerName)
if err == nil && existing.ID != "" {
err := client.ContainerRemove(context.Background(), types.ContainerRemoveOptions{


@@ -27,7 +27,7 @@ func Shutdown() {
log.InitLogger()
app := cli.NewApp()
app.Name = os.Args[0]
app.Name = filepath.Base(os.Args[0])
app.Usage = fmt.Sprintf("%s RancherOS\nbuilt: %s", app.Name, config.BuildDate)
app.Version = config.Version
app.Author = "Rancher Labs, Inc."
@@ -94,13 +94,22 @@ func Shutdown() {
if app.Name == "poweroff" {
app.Flags = append(app.Flags, cli.BoolTFlag{
Name: "P, poweroff",
Usage: "halt the machine",
Usage: "poweroff the machine",
Destination: &poweroffFlag,
})
} else {
// shutdown -h
// Equivalent to --poweroff
if app.Name == "shutdown" {
app.Flags = append(app.Flags, cli.BoolFlag{
Name: "h",
Usage: "poweroff the machine",
Destination: &poweroffFlag,
})
}
app.Flags = append(app.Flags, cli.BoolFlag{
Name: "P, poweroff",
Usage: "halt the machine",
Usage: "poweroff the machine",
Destination: &poweroffFlag,
})
}
@@ -181,6 +190,7 @@ func Kexec(previous bool, bootDir, append string) error {
// Reboot is used by installation / upgrade
// TODO: add kexec option
func Reboot() {
os.Args = []string{"reboot"}
reboot("reboot", false, syscall.LINUX_REBOOT_CMD_RESTART)
}
@@ -197,7 +207,12 @@ func shutdown(c *cli.Context) error {
}
timeArg := c.Args().Get(0)
if c.App.Name == "shutdown" && timeArg != "" {
// We may be called via an absolute path, so check that now and make sure we
// don't pass the wrong app name down. Aside from the logic in the immediate
// context here, the container name is derived from how we were called and
// cannot contain slashes.
appName := filepath.Base(c.App.Name)
if appName == "shutdown" && timeArg != "" {
if timeArg != "now" {
err := fmt.Errorf("Sorry, can't parse '%s' as time value (only 'now' supported)", timeArg)
log.Error(err)
@@ -206,7 +221,7 @@ func shutdown(c *cli.Context) error {
// TODO: if there are more params, LOG them
}
reboot(c.App.Name, forceFlag, powerCmd)
reboot(appName, forceFlag, powerCmd)
return nil
}


@@ -1,17 +1,18 @@
package sysinit
import (
initPkg "github.com/rancher/os/init"
"github.com/rancher/os/log"
"io/ioutil"
"os"
initPkg "github.com/rancher/os/init"
"github.com/rancher/os/log"
)
func Main() {
log.InitLogger()
resolve, err := ioutil.ReadFile("/etc/resolv.conf")
log.Infof("2Resolv.conf == [%s], %s", resolve, err)
log.Infof("Resolv.conf == [%s], %v", resolve, err)
log.Infof("Exec %v", os.Args)
if err := initPkg.SysInit(); err != nil {


@@ -1,23 +0,0 @@
package systemdocker
import (
"os"
"github.com/docker/docker/docker"
"github.com/rancher/os/config"
"github.com/rancher/os/log"
)
func Main() {
log.SetLevel(log.DebugLevel)
if os.Geteuid() != 0 {
log.Fatalf("%s: Need to be root", os.Args[0])
}
if os.Getenv("DOCKER_HOST") == "" {
os.Setenv("DOCKER_HOST", config.SystemDockerHost)
}
docker.RancherOSMain()
}


@@ -34,7 +34,7 @@ func NewDatasource(url string) *RemoteFile {
func (f *RemoteFile) IsAvailable() bool {
network.SetProxyEnvironmentVariables()
client := pkg.NewHTTPClient()
_, f.lastError = client.Get(f.url)
_, f.lastError = client.GetRetry(f.url)
return (f.lastError == nil)
}


@@ -26,7 +26,7 @@ func ChainCfgFuncs(cfg *CloudConfig, cfgFuncs CfgFuncs) (*CloudConfig, error) {
}
var err error
if cfg, err = cfgFunc(cfg); err != nil {
log.Errorf("Failed [%d/%d] %s: %s", i, len, name, err)
log.Errorf("Failed [%d/%d] %s: %v", i, len, name, err)
return cfg, err
}
log.Debugf("[%d/%d] Done %s", i, len, name)


@@ -8,7 +8,7 @@ import (
)
func (d *DockerConfig) FullArgs() []string {
args := []string{"daemon"}
args := []string{}
args = append(args, generateEngineOptsSlice(d.EngineOpts)...)
args = append(args, d.ExtraArgs...)
if d.TLS {


@@ -148,7 +148,9 @@ var schema = `{
"properties": {
"keys": {"type": "object"},
"daemon": {"type": "boolean"}
"daemon": {"type": "boolean"},
"port": {"type": "integer"},
"listen_address": {"type": "string"}
}
},


@@ -25,7 +25,7 @@ const (
ModulesArchive = "/modules.tar"
Debug = false
SystemDockerLog = "/var/log/system-docker.log"
SystemDockerBin = "/usr/bin/system-docker"
SystemDockerBin = "/usr/bin/system-dockerd"
HashLabel = "io.rancher.os.hash"
IDLabel = "io.rancher.os.id"
@@ -182,8 +182,10 @@ type DockerConfig struct {
}
type SSHConfig struct {
Keys map[string]string `yaml:"keys,omitempty"`
Daemon bool `yaml:"daemon,omitempty"`
Keys map[string]string `yaml:"keys,omitempty"`
Daemon bool `yaml:"daemon,omitempty"`
Port int `yaml:"port,omitempty"`
ListenAddress string `yaml:"listen_address,omitempty"`
}
type StateConfig struct {


@@ -16,6 +16,7 @@ func testValidate(t *testing.T, cfg []byte, contains string) {
t.Fatal(err)
}
if contains == "" && len(validationErrors.Errors()) != 0 {
fmt.Printf("validationErrors: %v", validationErrors.Errors())
t.Fail()
}
if !strings.Contains(fmt.Sprint(validationErrors.Errors()), contains) {


@@ -75,7 +75,7 @@ func createOptionalMounts(mounts ...[]string) {
log.Debugf("Mounting %s %s %s %s", mount[0], mount[1], mount[2], mount[3])
err := util.Mount(mount[0], mount[1], mount[2], mount[3])
if err != nil {
log.Debugf("Unable to mount %s %s %s %s: %s", mount[0], mount[1], mount[2], mount[3], err)
log.Debugf("Unable to mount %s %s %s %s: %v", mount[0], mount[1], mount[2], mount[3], err)
}
}
}
@@ -354,7 +354,7 @@ ff02::2 ip6-allrouters
 	if len(cfg.DNSConfig.Nameservers) != 0 {
 		resolve, err := ioutil.ReadFile("/etc/resolv.conf")
-		log.Debugf("Resolve.conf == [%s], err", resolve, err)
+		log.Debugf("Resolve.conf == [%s], %v", resolve, err)
 		if err != nil {
 			log.Infof("scratch Writing empty resolv.conf (%v) %v", []string{}, []string{})


@@ -55,6 +55,9 @@ func (s *Service) missingImage() bool {
 	}
 	client := s.context.ClientFactory.Create(s)
 	_, _, err := client.ImageInspectWithRaw(context.Background(), image, false)
+	if err != nil {
+		log.Errorf("Missing the image: %v", err)
+	}
 	return err != nil
 }


@@ -252,7 +252,7 @@ func RunInit() error {
 	config.SaveInitCmdline(cmdLineArgs)
 	cfg := config.LoadConfig()
-	log.Debugf("Cmdline debug = %s", cfg.Rancher.Debug)
+	log.Debugf("Cmdline debug = %t", cfg.Rancher.Debug)
 	if cfg.Rancher.Debug {
 		log.SetLevel(log.DebugLevel)
 	} else {


@@ -1,11 +1,11 @@
 package init
 
 import (
-	log "github.com/Sirupsen/logrus"
 	composeConfig "github.com/docker/libcompose/config"
 	"github.com/docker/libcompose/yaml"
 	"github.com/rancher/os/compose"
 	"github.com/rancher/os/config"
+	"github.com/rancher/os/log"
 	"github.com/rancher/os/netconf"
 )


@@ -2,11 +2,13 @@ package init
 
 import (
 	"os"
+	"os/exec"
+	"path"
 	"syscall"
 
 	"golang.org/x/net/context"
 
 	"github.com/docker/engine-api/types"
 	"github.com/docker/libcompose/project/options"
 	"github.com/rancher/os/cmd/control"
 	"github.com/rancher/os/compose"
@@ -74,22 +76,22 @@ func loadImages(cfg *config.CloudConfig) (*config.CloudConfig, error) {
 			continue
 		}
+		// client.ImageLoad is an asynchronous operation
+		// To ensure the order of execution, use cmd instead of it
 		inputFileName := path.Join(config.ImagesPath, image)
 		input, err := os.Open(inputFileName)
 		if err != nil {
 			return cfg, err
 		}
 		defer input.Close()
 		log.Infof("Loading images from %s", inputFileName)
-		if _, err = client.ImageLoad(context.Background(), input, true); err != nil {
+		if err = exec.Command("/usr/bin/system-docker", "load", "-q", "-i", inputFileName).Run(); err != nil {
 			log.Fatalf("FATAL: failed loading images from %s: %s", inputFileName, err)
 		}
 		log.Infof("Done loading images from %s", inputFileName)
 	}
 	dockerImages, _ := client.ImageList(context.Background(), types.ImageListOptions{})
 	for _, dimg := range dockerImages {
 		log.Infof("Got image repo tags: %s", dimg.RepoTags)
 	}
 	return cfg, nil
 }


@@ -64,7 +64,7 @@ func (hook *ShowuserlogHook) NotUsedYetLogSystemReady() error {
 	if hook.syslogHook == nil {
 		h, err := logrus_syslog.NewSyslogHook("", "", syslog.LOG_INFO, "")
 		if err != nil {
-			logrus.Debugf("error creating SyslogHook: %s", err)
+			logrus.Debugf("error creating SyslogHook: %v", err)
 			return err
 		}
 		hook.syslogHook = h


@@ -16,7 +16,6 @@ import (
 	"github.com/rancher/os/cmd/power"
 	"github.com/rancher/os/cmd/respawn"
 	"github.com/rancher/os/cmd/sysinit"
-	"github.com/rancher/os/cmd/systemdocker"
 	"github.com/rancher/os/cmd/wait"
 	"github.com/rancher/os/dfs"
 	osInit "github.com/rancher/os/init"
@@ -35,7 +34,6 @@ var entrypoints = map[string]func(){
 	"recovery": control.AutologinMain,
 	"ros-bootstrap": control.BootstrapMain,
 	"ros-sysinit": sysinit.Main,
-	"system-docker": systemdocker.Main,
 	"wait-for-docker": wait.Main,
 	"cni-glue": glue.Main,
 	"bridge": bridge.Main,


@@ -3,6 +3,7 @@ rancher:
   environment:
     VERSION: {{.VERSION}}
     SUFFIX: {{.SUFFIX}}
+    REGISTRY_DOMAIN: "docker.io"
   defaults:
     hostname: {{.HOSTNAME_DEFAULT}}
     {{if eq "amd64" .ARCH -}}
@@ -64,7 +65,7 @@ rancher:
     - /var/log:/var/log
   bootstrap_docker:
     bridge: none
-    storage_driver: overlay
+    storage_driver: overlay2
     restart: false
     graph: /var/lib/system-docker
     group: root
@@ -84,19 +85,86 @@ rancher:
   sysctl:
     fs.file-max: 1000000000
   services:
-    {{if eq "amd64" .ARCH -}}
-    acpid:
-      image: {{.OS_REPO}}/os-acpid:{{.VERSION}}{{.SUFFIX}}
-      command: /usr/sbin/acpid -f
+    command-volumes:
+      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
+      command: echo
       labels:
+        io.rancher.os.createonly: "true"
         io.rancher.os.scope: system
-      net: host
-      uts: host
+      log_driver: json-file
+      net: none
       privileged: true
-      volumes_from:
-        - command-volumes
-        - system-volumes
-    {{end -}}
+      read_only: true
+      volumes:
+        - /usr/bin/ros:/usr/bin/ros:ro
+        - /usr/bin/system-docker:/usr/bin/system-docker:ro
+        - /usr/bin/system-docker-runc:/usr/bin/system-docker-runc:ro
+    system-volumes:
+      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
+      command: echo
+      labels:
+        io.rancher.os.createonly: "true"
+        io.rancher.os.scope: system
+      log_driver: json-file
+      net: none
+      privileged: true
+      read_only: true
+      volumes:
+        - /dev:/host/dev
+        - /etc/docker:/etc/docker
+        - /etc/hosts:/etc/hosts
+        - /etc/logrotate.d:/etc/logrotate.d
+        - /etc/resolv.conf:/etc/resolv.conf
+        - /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt.rancher
+        - /etc/selinux:/etc/selinux
+        - /lib/firmware:/lib/firmware
+        - /lib/modules:/lib/modules
+        - /run:/run
+        - /usr/share/ros:/usr/share/ros
+        - /var/lib/rancher/cache:/var/lib/rancher/cache
+        - /var/lib/rancher/conf:/var/lib/rancher/conf
+        - /var/lib/rancher:/var/lib/rancher
+        - /var/log:/var/log
+        - /var/run:/var/run
+    container-data-volumes:
+      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
+      command: echo
+      labels:
+        io.rancher.os.createonly: "true"
+        io.rancher.os.scope: system
+      log_driver: json-file
+      net: none
+      privileged: true
+      read_only: true
+      volumes:
+        - /var/lib/docker:/var/lib/docker
+    user-volumes:
+      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
+      command: echo
+      labels:
+        io.rancher.os.createonly: "true"
+        io.rancher.os.scope: system
+      log_driver: json-file
+      net: none
+      privileged: true
+      read_only: true
+      volumes:
+        - /home:/home
+        - /opt:/opt
+        - /var/lib/kubelet:/var/lib/kubelet
+    media-volumes:
+      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
+      command: echo
+      labels:
+        io.rancher.os.createonly: "true"
+        io.rancher.os.scope: system
+      log_driver: json-file
+      net: none
+      privileged: true
+      read_only: true
+      volumes:
+        - /media:/media:shared
+        - /mnt:/mnt:shared
     all-volumes:
       image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
       command: echo
@@ -114,6 +182,19 @@ rancher:
         - media-volumes
         - user-volumes
         - system-volumes
+    {{if eq "amd64" .ARCH -}}
+    acpid:
+      image: {{.OS_REPO}}/os-acpid:{{.VERSION}}{{.SUFFIX}}
+      command: /usr/sbin/acpid -f
+      labels:
+        io.rancher.os.scope: system
+      net: host
+      uts: host
+      privileged: true
+      volumes_from:
+        - command-volumes
+        - system-volumes
+    {{end -}}
     cloud-init-execute:
       image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
       command: cloud-init-execute -pre-console
@@ -127,18 +208,6 @@ rancher:
       volumes_from:
         - system-volumes
       volumes:
         - /usr/bin/ros:/usr/bin/ros
-    command-volumes:
-      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
-      command: echo
-      labels:
-        io.rancher.os.createonly: "true"
-        io.rancher.os.scope: system
-      log_driver: json-file
-      net: none
-      privileged: true
-      read_only: true
-      volumes:
-        - /usr/bin/ros:/usr/bin/ros:ro
     console:
       image: {{.OS_REPO}}/os-console:{{.VERSION}}{{.SUFFIX}}
@@ -162,18 +231,6 @@ rancher:
         - all-volumes
       volumes:
         - /usr/bin/iptables:/sbin/iptables:ro
-    container-data-volumes:
-      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
-      command: echo
-      labels:
-        io.rancher.os.createonly: "true"
-        io.rancher.os.scope: system
-      log_driver: json-file
-      net: none
-      privileged: true
-      read_only: true
-      volumes:
-        - /var/lib/docker:/var/lib/docker
     logrotate:
       image: {{.OS_REPO}}/os-logrotate:{{.VERSION}}{{.SUFFIX}}
       command: /usr/sbin/logrotate -v /etc/logrotate.conf
@@ -188,19 +245,6 @@ rancher:
       volumes_from:
         - command-volumes
        - system-volumes
-    media-volumes:
-      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
-      command: echo
-      labels:
-        io.rancher.os.createonly: "true"
-        io.rancher.os.scope: system
-      log_driver: json-file
-      net: none
-      privileged: true
-      read_only: true
-      volumes:
-        - /media:/media:shared
-        - /mnt:/mnt:shared
     network:
       image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
       command: netconf
@@ -213,8 +257,8 @@ rancher:
       pid: host
       privileged: true
       volumes_from:
-        - command-volumes
         - system-volumes
+        - command-volumes
       volumes:
         - /usr/bin/iptables:/sbin/iptables:ro
     ntp:
@@ -255,7 +299,11 @@ rancher:
         - command-volumes
         - system-volumes
     system-cron:
-      image: rancher/container-crontab:v0.1.0
+      {{if eq "amd64" .ARCH -}}
+      image: rancher/container-crontab:v0.4.0
+      {{else -}}
+      image: niusmallnan/container-crontab:v0.4.0{{.SUFFIX}}
+      {{end -}}
       labels:
         io.rancher.os.scope: system
       uts: host
@@ -263,34 +311,9 @@ rancher:
       privileged: true
       restart: always
       volumes:
-        - /var/run/system-docker.sock:/var/run/docker.sock
-    system-volumes:
-      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
-      command: echo
-      labels:
-        io.rancher.os.createonly: "true"
-        io.rancher.os.scope: system
-      log_driver: json-file
-      net: none
-      privileged: true
-      read_only: true
-      volumes:
-        - /dev:/host/dev
-        - /etc/docker:/etc/docker
-        - /etc/hosts:/etc/hosts
-        - /etc/logrotate.d:/etc/logrotate.d
-        - /etc/resolv.conf:/etc/resolv.conf
-        - /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt.rancher
-        - /etc/selinux:/etc/selinux
-        - /lib/firmware:/lib/firmware
-        - /lib/modules:/lib/modules
-        - /run:/run
-        - /usr/share/ros:/usr/share/ros
-        - /var/lib/rancher/cache:/var/lib/rancher/cache
-        - /var/lib/rancher/conf:/var/lib/rancher/conf
-        - /var/lib/rancher:/var/lib/rancher
-        - /var/log:/var/log
-        - /var/run:/var/run
+        - /var/run/system-docker.sock:/var/run/docker.sock
+      environment:
+        DOCKER_API_VERSION: "1.22"
     udev-cold:
       image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
       command: ros udev-settle
@@ -317,20 +340,6 @@ rancher:
       volumes_from:
         - command-volumes
         - system-volumes
-    user-volumes:
-      image: {{.OS_REPO}}/os-base:{{.VERSION}}{{.SUFFIX}}
-      command: echo
-      labels:
-        io.rancher.os.createonly: "true"
-        io.rancher.os.scope: system
-      log_driver: json-file
-      net: none
-      privileged: true
-      read_only: true
-      volumes:
-        - /home:/home
-        - /opt:/opt
-        - /var/lib/kubelet:/var/lib/kubelet
     docker:
       {{if eq "amd64" .ARCH -}}
       image: {{.OS_REPO}}/os-docker:17.09.1{{.SUFFIX}}
@@ -358,7 +367,8 @@ rancher:
       - /var/lib/system-docker:/var/lib/system-docker:shared
   system_docker:
     exec: true
-    storage_driver: overlay
+    storage_driver: overlay2
+    bridge: none
     restart: false
     graph: /var/lib/system-docker
     group: root


@@ -16,5 +16,5 @@ OUTPUT=${OUTPUT:-bin/ros}
 echo Building $OUTPUT
 BUILDDATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
-CONST="-X github.com/docker/docker/dockerversion.GitCommit=${COMMIT} -X github.com/docker/docker/dockerversion.Version=${DOCKER_PATCH_VERSION} -X github.com/docker/docker/dockerversion.BuildTime='${BUILDDATE}' -X github.com/docker/docker/dockerversion.IAmStatic=true -X github.com/rancher/os/config.Version=${VERSION} -X github.com/rancher/os/config.OsRepo=${OS_REPO} -X github.com/rancher/os/config.BuildDate='${BUILDDATE}'"
+CONST="-X github.com/rancher/os/config.Version=${VERSION} -X github.com/rancher/os/config.OsRepo=${OS_REPO} -X github.com/rancher/os/config.BuildDate='${BUILDDATE}'"
 go build -tags "selinux cgo daemon netgo" -installsuffix netgo -ldflags "$CONST -linkmode external -extldflags -static -s -w" -o ${OUTPUT}


@@ -1,2 +1 @@
-APPEND rancher.autologin=tty1 rancher.autologin=ttyS0 rancher.autologin=ttyS1 rancher.autologin=ttyS1 console=tty1 console=ttyS0 console=ttyS1 printk.devkmsg=on ${APPEND}
+APPEND rancher.autologin=tty1 rancher.autologin=ttyS0 rancher.autologin=ttyS1 rancher.autologin=ttyS1 console=tty1 console=ttyS0 console=ttyS1 printk.devkmsg=on panic=10 ${APPEND}


@@ -13,11 +13,11 @@ RUN mkdir -p /source/assets
 COPY rootfs_arm64.tar.gz /source/assets/rootfs_arm64.tar.gz
 ENV URL=https://github.com/DieterReuter/rpi64-kernel/releases/download
-ENV VER=v20171216-172054
+ENV VER=v20180319-130037
-RUN curl -fL ${URL}/${VER}/4.9.69-hypriotos-v8.tar.gz > /source/assets/kernel.tar.gz
+RUN curl -fL ${URL}/${VER}/4.9.80-hypriotos-v8.tar.gz > /source/assets/kernel.tar.gz
 RUN curl -fL ${URL}/${VER}/bootfiles.tar.gz > /source/assets/bootfiles.tar.gz
-RUN curl -fL https://github.com/DieterReuter/rpi-bootloader/releases/download/v20171216-171651/rpi-bootloader.tar.gz > /source/assets/rpi-bootfiles.tar.gz
+RUN curl -fL https://github.com/DieterReuter/rpi-bootloader/releases/download/v20180320-071222/rpi-bootloader.tar.gz > /source/assets/rpi-bootfiles.tar.gz
 #ENV RPI_URL=https://github.com/raspberrypi/firmware/raw/master/boot
 #RUN curl -fL ${RPI_URL}/bootcode.bin > /source/assets/bootcode.bin


@@ -13,11 +13,13 @@ cp bin/ros ${INITRD_DIR}/usr/bin/
 ln -s usr/bin/ros ${INITRD_DIR}/init
 ln -s bin ${INITRD_DIR}/usr/sbin
 ln -s usr/sbin ${INITRD_DIR}/sbin
-ln -s ros ${INITRD_DIR}/usr/bin/system-docker
 ln -s ros ${INITRD_DIR}/usr/bin/docker-runc
 ln -s ../../../../usr/bin/ros ${INITRD_DIR}/usr/var/lib/cni/bin/bridge
 ln -s ../../../../usr/bin/ros ${INITRD_DIR}/usr/var/lib/cni/bin/host-local
+curl -SL ${!SYSTEM_DOCKER_URL} | tar --strip-components=1 -xzvf - -C ${INITRD_DIR}/usr/bin/
+# we have disabled the user-proxy so we get rid of system-docker-proxy
+rm -f ${INITRD_DIR}/usr/bin/system-docker-proxy
 cat <<HERE > ${INITRD_DIR}/usr/share/ros/os-release
 NAME="RancherOS"
 VERSION=${VERSION}
@@ -31,7 +33,8 @@ BUG_REPORT_URL="https://github.com/rancher/os/issues"
 BUILD_ID=
 HERE
 # TODO: usr/lib dir is overwritten by the kernel modules and firmware
-ln -s ../share/ros/os-release ${INITRD_DIR}/usr/lib/
+ln -s ${INITRD_DIR}/usr/share/ros/os-release ${INITRD_DIR}/usr/lib/
+ln -s ${INITRD_DIR}/usr/share/ros/os-release ${INITRD_DIR}/usr/etc/
 # Support upgrades from old persistent consoles that bind mount these
 touch ${INITRD_DIR}/usr/bin/docker-containerd


@@ -13,5 +13,5 @@ for i in $IMAGES; do
 done
 echo "tar-images: docker save ${IMAGES} > build/images.tar"
-docker save ${IMAGES} > build/images.tar
+docker save ${IMAGES} | xz > build/images.tar
 echo "tar-images: DONE"