juju-core 1.23-beta1
Milestone information
- Project:
- juju-core
- Series:
- 1.23
- Version:
- 1.23-beta1
- Released:
- Registrant:
- Curtis Hovey
- Release registered:
- Active:
- No. Drivers cannot target bugs and blueprints to this milestone.
Activities
- Assigned to you:
- No blueprints or bugs assigned to you.
- Assignees:
- 5 Anastasia, 2 Andrew Wilkins, 2 Bodie Solomon, 12 Dimiter Naydenov, 7 Eric Snow, 1 Francesco Banconi, 2 Frank Mueller, 7 Horacio Durán, 8 Ian Booth, 5 James Tunnicliffe, 1 Jesse Meek, 1 John Weldon, 2 Katherine Cox-Buday, 1 Marco Ceppi, 4 Menno Finlay-Smits, 2 Michael Foord, 1 Nate Finch, 2 Tim Penhey, 2 Wayne Witzel III, 1 William Reade
- Blueprints:
- No blueprints are targeted to this milestone.
- Bugs:
- 70 Fix Released
Download files for this release
Release notes
# juju-core 1.23-beta1
A new development release of Juju, juju-core 1.23-beta1, is now available.
This release replaces 1.22.0.
## Upgrades from 1.22 are broken
As seen in Lp 1434680, 1.22.0 environments cannot upgrade to 1.23-beta1.
Upgrading environments from earlier versions of Juju, such as 1.21,
should work.
## Getting Juju
juju-core 1.23-beta1 is available for vivid and backported to earlier
series in the following PPA:
https:/
Windows and OS X users will find installers at:
https:/
Development releases use the 'devel' simple-streams. You must configure
the 'agent-stream' option in your environments.yaml to use the matching
juju agents.
agent-stream: devel
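In context, a minimal environments.yaml stanza with this option set might look like the following sketch (the 'test-ec2' name, provider type, and region here are placeholders, not part of this release's documentation):
environments:
    test-ec2:
        type: ec2            # placeholder provider
        region: us-east-1    # placeholder region
        agent-stream: devel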
Upgrading from stable releases to development releases is not
supported. You can upgrade test environments to development releases
to test new features and fixes, but it is not advised to upgrade
production environments to 1.23-beta1. In particular, upgrades from 1.22
are broken. See above.
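As a sketch, a test environment bootstrapped with an earlier release such as 1.21, and with 'agent-stream: devel' configured, could be upgraded to this beta like so:
juju upgrade-juju --version 1.23-beta1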
## Notable Changes
* New Blocks
* New Style Restore
* Addressable LXC and KVM Containers on EC2 and MAAS
* Improved Proxy Support for Restrictive Networks
* New Charm Actions
* New Support for Google Compute Engine (GCE)
* Service Leader Elections
* Support for systemd (and Vivid)
### New blocks
You can now specify a block message when you enable a block. For example,
you can add a message to the 'destroy-environment' block:
juju block destroy-environment "Don't destroy this environment"
juju destroy-environment
ERROR Don't destroy this environment
You can list the blocks enabled in the environment like so:
juju block list
destroy-environment=on
remove-object=off
all-changes=off
The Multiwatcher now has information about blocks. There is now a block
client capable of switching blocks on/off as well as listing all enabled
blocks.
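For example, the other block types listed above can be enabled the same way as 'destroy-environment' (a short sketch using only the commands shown in this section; the message text is illustrative):
juju block all-changes "Maintenance in progress"
juju block list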
### New Style Restore
You can now restore a backup with the new 'backups restore' command,
which is more reliable and faster. The new restore supports backups generated
with the deprecated Juju backup plugin and with the recently added 'juju
backups create' command. You can restore from a local backup file like
so:
juju backups restore [-b] --file <backup file>
This will optionally bootstrap a new state server, upload the backup file,
and restore it. The -b flag will fail if there is already a running state
server.
You can also restore from a backup stored on the state-server:
juju backups restore --id <on server backup id>
To obtain a list of the existing backups in the state-server you can
use:
juju backups list
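Putting these commands together, a typical round trip looks like this (the id passed to --id is whatever 'juju backups create' or 'juju backups list' reports):
juju backups create
juju backups list
juju backups restore --id <on server backup id>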
### Addressable LXC and KVM containers on EC2 and MAAS
The Juju EC2 and MAAS providers now support starting LXC and KVM
containers with statically allocated IP addresses from the same subnet
as their host machine. This means workloads inside containers have the
same network connectivity as workloads deployed on machines. Nothing
special is needed to benefit from this feature, as Juju detects whether
address allocation is supported and handles the necessary steps
automatically. Example:
juju deploy wordpress --to lxc:0
juju add-unit mysql --to kvm:1
Once the container is provisioned and started, 'juju status' will show
that it has an address from the same subnet range as its host.
On MAAS, the juju-br0 bridge device is no longer created at initial boot
so that containers can acquire IP addresses via DHCP. Instead, depending
on the container type, the default lxcbr0 (LXC) or virbr0 (KVM) will be
used. This also solves a number of issues with more complex networking.
There are a few known limitations at this stage, but most of them will
be resolved in time for the 1.23 stable release:
* The EC2 Ubuntu images Juju uses typically do not support KVM extensions.
* EC2 has limits on the number of additional IPs a certain instance
type can have. If you plan on starting a lot of addressable
containers, please make sure you select a larger instance type. Juju
will eventually expose information like "address limit exhausted" so
that such cases are easier for the user to detect.
http://
* Statically allocated addresses are not yet released on container
shutdown, but a solution to this is already in progress.
* At this stage, Juju does not guarantee every container will have a
static IP at launch, but it makes a best effort to do so. If
allocation fails for some reason, every step of the process is
logged, and the container will still come up with a host-local IP
(e.g. 10.0.3.x for LXC and 192.168.122.x for KVM).
* Workloads inside addressable containers can be exposed behind their
host's public IP address, but port conflicts are not detected or
handled. This means, for example, that if port 80 is taken by a service
on the host, another service in a container listening on port 80 won't
be accessible (see the example after this list).
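For example, assuming the wordpress unit from the earlier example is the only service using port 80 on its host, it can be exposed behind the host's public address as usual:
juju deploy wordpress --to lxc:0
juju expose wordpress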
### Improved Proxy Support for Restrictive Networks
A few issues around HTTP/HTTPS and apt proxy support were fixed (Lp
1403225, Lp 1417617). Charm downloads from the charm store which could
not be completed due to connectivity issues are now retried every few
minutes rather than once every 24 hours. Proxy settings from the
environment (http-proxy, https-proxy, ftp-proxy, apt-http-proxy,
apt-https-proxy, apt-ftp-proxy, and no-proxy) are properly propagated to
all machines, and Juju agents use them for all external connectivity.
The juju run command also uses proxy settings when defined, as do
debug-hooks and all hooks that a charm runs. You can specify one or more
proxy settings via environment variables (http_proxy, https_proxy, etc.)
or inside your environments.yaml. Other related proxies are configured
as needed (e.g. you can specify just http-proxy, and that will also be
used for https, ftp, and apt proxies).
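As a sketch, the same settings can be placed directly in an environments.yaml stanza (the environment name and proxy host below are placeholders; other provider settings are omitted):
my-maas:
    type: maas
    # ... other provider settings omitted
    http-proxy: http://proxy.example.com:3128
    https-proxy: http://proxy.example.com:3128
    apt-http-proxy: http://proxy.example.com:3128
    no-proxy: localhost,127.0.0.1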
### New Charm Actions
Juju charms can describe actions that users can take on deployed
services. These actions are scripts that can be triggered on a unit
via the Juju CLI (support for triggering actions from the Juju GUI
will follow soon). Schemas for each action are defined in an
actions.yaml file in the charm root, and conform to JSON-Schema. When an
action is invoked, the passed parameters are validated against the
respective schema, at both the API and the unit level, as explained in
"Actions for the Charm author":
https:/
CLI Actions are sub-commands of the 'juju action' command. For more
details on their usage, 'juju action help' has examples and further
material.
The following subcommands are currently specified:
* defined - show actions defined for a service
* do - queue an action for execution
* fetch - show results of an action by ID
* status - show results of actions filtered by optional ID prefix
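As an illustration, a charm might ship an actions.yaml describing a hypothetical 'snapshot' action (the action, parameter, and service names below are made up for this sketch):
snapshot:
    description: Take a snapshot of the database.
    params:
        outfile:
            type: string
            description: The file to write the snapshot to.
The action could then be queued and its result inspected from the CLI:
juju action do mysql/0 snapshot outfile=/tmp/snapshot.sql
juju action fetch <action id>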
### New Support for Google Compute Engine (GCE)
A new provider has been added that supports hosting a Juju environment
in GCE. This feature leverages the support for Ubuntu cloud-images that
GCE added in late 2014. It uses Google's GCE API to interact with your
account there. API authentication credentials, as well as other config
options, must be added to your environments.yaml file before running
'juju bootstrap'. The different options are described below.
The basic config options in your environments.yaml will look like this:
my-gce:
    type: gce
    project-id: <your-project-id>
    private-key: <your-private-key>
    client-email: <your-client-email>
    client-id: <your-client-id>
The values in angle brackets need to be replaced with your GCE information.
'project-id' must identify a GCE project that already exists before you
run "juju bootstrap". This means creating a new one through the
developer console (https:/ ) before
bootstrapping Juju. To make the project easier to identify in your GCE
console, we recommend that its name start with 'juju-' and that it
include the environment name you are planning to use. You could also
use an existing project but we recommend against that if possible.
Using a new project will make it easier for you to manage the
environment's resources as well as to track the environment's cost and
resource usage.
'private-key', 'client-email', and 'client-id' are your GCE OAuth
credentials. These details are associated with the 'service account' of
the GCE project you will use for your Juju environment. For each GCE
project, a service account is set up automatically when you create
your project. Juju uses that service account to connect to the GCE API
and does so with the proper authentication scope. After you have
created the project, go to the following URL to get the
credentials to use in environments.yaml:
https:/
For more information please refer to
https:/
and https:/
If the project's service account has any permissions problems go to the
following page to fix them:
https:/
The GCE API should already be activated for the project. If it isn't,
go to the following URL in your console:
https:/
Also see step 2 on
https:/
The following config options in your environments.yaml file are
optional:
region - (default us-central1) The location to place the
    environment's instances.
image-endpoint - (default https:/ ) The endpoint where Juju will look
    for disk images when provisioning a new instance on GCE.
All Juju 1.23 provider capabilities are available for GCE except for
networking.
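Once the 'my-gce' stanza above is filled in with your project's credentials, bootstrapping works as it does for any other provider, for example:
juju bootstrap -e my-gce
juju status -e my-gce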
### Service Leader Elections
We will send a separate announcement about Service Leader Elections.
### Support for systemd (and Vivid)
In addition to upstart, Juju now supports Ubuntu hosts using systemd as
their init system.
Support for systemd allows Juju to run on Ubuntu 15.04 (Vivid Vervet),
which is the first Ubuntu release to boot with systemd. This means you
can bootstrap Juju on a Vivid host. Note that the charm store
(jujucharms.com) only supports LTS releases. You can develop and test
vivid charms in a local charm repository.
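For example, a vivid charm kept in a local repository can be deployed with the usual local-repository syntax (the repository path and charm name here are placeholders):
juju deploy --repository=~/charms local:vivid/my-charm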
## Resolved issues
* Allow annotations to be set on charms
Lp 1313016
* Juju-backup is not a valid plugin
Lp 1389326
* Juju needs to support systemd for >= vivid
Lp 1409639
* Joyent provider uploads user's private ssh key by default
Lp 1415671
* Unable to bootstrap on cn-north-1
Lp 1415693
* Debug messages show when only info was asked for
Lp 1421237
* Juju default logging leaks credentials
Lp 1423272
* Juju resolve doesn't recognize error state
Lp 1424069
* Juju status --format=tabular
Lp 1424590
* Ec2 provider unaware of c3 types in sa-east-1
Lp 1427840
* Ec2 eu-central-1 region not in provider
Lp 1428117
* Ec2 provider does not include c4 instance family
Lp 1428119
* Allwatcher does not remove last closed port for a unit, last removed
service config
Lp 1428430
* Make kvm containers addressable (esp. on maas)
Lp 1431130
* Fix container addressability issues with cloud-init, precise, when
lxc-clone is true
Lp 1431134
* Dhcp's "option interface-mtu 9000" is being ignored on bridge
interface br0
Lp 1403955
## Finally
We encourage everyone to subscribe to the mailing list at
juju-dev@
Changelog
This release does not have a changelog.