UbuntuIRC / 2015/08/03 / #juju.txt
[01:44] <blr> noted today that $LANG is unset in a hook execution context, this can cause problems for some python libraries that (arguably incorrectly) rely on a default system encoding e.g. calling codecs.open()
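For context, a minimal sketch of why an unset $LANG bites here (the file name is illustrative, not from any particular charm): with no locale set, Python's preferred encoding typically falls back to ASCII, so code that leans on the implicit default chokes on non-ASCII content, whereas passing an explicit encoding is immune to the hook environment.

    import codecs
    import locale

    # With $LANG unset (as in a hook context), this is commonly
    # 'ANSI_X3.4-1968', i.e. plain ASCII.
    print(locale.getpreferredencoding())

    # Fragile: some libraries effectively do this, which breaks when the
    # hook environment leaves the locale (and hence the encoding) unset.
    # codecs.open('notes.txt', encoding=locale.getpreferredencoding())

    # Robust inside a hook: state the encoding explicitly.
    with codecs.open('notes.txt', 'r', encoding='utf-8') as f:
        data = f.read()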
[02:14] <blr> commented on https://github.com/juju/juju/issues/133 marcoceppi, any thoughts on resolving that one?
[02:45] <jose> Odd_Bloke, rcj: ping
[09:24] <jamespage> gnuoy, cinder is suffering from inadequate patching in its unit tests
[09:24] <jamespage> they work ok on a real machine - but on a virt machine (like the test environment)
[09:24] <jamespage> vdb is a real device :-)
[09:24] <jamespage> I've worked around this for now by prefixing device names with 'fake'
[09:24] <jamespage> but it will need a wider review - I'll raise a bug task for it
[09:24] <jamespage> for 15.10
[09:35] <jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/cinder/unit-test-fixes/+merge/266692 review please :-)
[09:35] <jamespage> resolves the cinder test failures for now
[09:40] <gnuoy> jamespage, thanks, merged
[09:50] <Odd_Bloke> jose: Pong.
[09:59] <jamespage> gnuoy, anything else I can help with?
[10:09] <gnuoy> jamespage, well if you wanted to create a skeleton release note I wouldn't hold it against you ...
[10:11] <jamespage> gnuoy, ok
[10:22] <jamespage> gnuoy, dosaboy, beisner: https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1507
[10:22] <gnuoy> ta
[10:25] <jamespage> gnuoy, deploy from source needs coreycb input - is he back today?
[10:26] * jamespage goes to look
[10:43] <gnuoy> jamespage, looks like we have charm upgrade breakage.
[10:43] <gnuoy> rabbit is refusing connections from neutron-gateway after the upgrade (invalid creds). investigating now
[10:45] <gnuoy> Actually, it's refusing connections from everywhere by the looks of it
[11:03] <jamespage> gnuoy, that would indicate a crappy rabbitmq upgrade methinks
[11:03] <jamespage> gnuoy, is that on 1.24?
[11:06] <jamespage> gnuoy, I'll take a peek
[11:15] <jamespage> gnuoy, I think I see a potential bug in the migration code in peerstorage
[11:19] <gnuoy> jamespage, sorry, was stuffing food in my face
[11:19] <gnuoy> jamespage, yes, 1.24
[11:20] <gnuoy> jamespage, am all ears vis-a-vis the peerstorage migration bug. I still have my environment so can supply additional debug if helpful
[11:20] <jamespage> gnuoy, L97
[11:20] <jamespage> there is a relation_get without a rid
[11:21] <gnuoy> ah
[11:21] <jamespage> so I think that may cause a regeneration of all passwords if called outside of the cluster relation context
[11:22] <jamespage> gnuoy, grrr - redeploying as dns foobar
[11:28] <gnuoy> jamespage, it definitely looks like the password has changed in rabbit, using "rabbitmqctl change_password" to set it back to what it was seems to fix things.
[11:28] <jamespage> gnuoy, the password changed in rabbit, or the password changed on all the relations?
[11:28] <jamespage> that's my hypothesis
[11:29] <gnuoy> jamespage, the password changed in rabbit
[11:30] <gnuoy> jamespage, well actually, what I'm saying is, the password in the client config and the password advertised down the relations are the same but they don't seem to equal the actual password rabbit has for the user
[11:30] <jamespage> gnuoy, yah
[11:30] <jamespage> that matches my theory - just trying to prove it
[11:30] <gnuoy> kk
[11:31] <jamespage> gnuoy, there is no code in the charm that changes passwords in rabbit, but it would ignore a change triggered by a broken migration - that would propagate out to related services, but not reflect the actual password
[11:32] <beisner> o/ good morning
[11:33] <beisner> gnuoy, afaict, reverse dns is/was a-ok. host entries are coming and going with instances.
[11:34] <gnuoy> beisner, it worked straight through on my bastion with the only error being trusty/git/icehouse. I've scheduled another run but it's been in the queue for ~4hours
[11:34] <beisner> gnuoy, however, i have observed that due to rmq-funk in serverstack, some messages are really delayed. that is observable in that serverstack-dns may not always have the message back and the reverse dns record added by the time the instance is already booted and on its way. :-/
[11:35] <beisner> gnuoy, saw this as well on ddellav's bastion as we were t-shooting a failed re-(re)deploy
[11:35] <jamespage> my dns appears foobarred right now
[11:35] <jamespage> I thought I just fixed it up
[11:35] <beisner> gnuoy, throttle is way down. if we turn it up to have more concurrency, serverstack gives us error instances.
[11:36] <beisner> i just removed 6 error instances from last night (which induced some job fails)
[11:36] <jamespage> beisner, indeed - I have partial entries for my dns
[11:36] <gnuoy> jamespage, I don't follow the scenario you outlined. broken migration?
[11:36] <gnuoy> I assume you don't mean db migration
[11:36] <jamespage> gnuoy, yeah - the migration incorrectly missed the peer relation data, so generates a new password
[11:37] <jamespage> gnuoy, peer -> leader migration
[11:37] <gnuoy> oh, yes, of course that migration
[11:37] <gnuoy> jamespage, so rabbit is pushing out a new password to the clients without actually changing the password for the user to the new value?
[11:38] <jamespage> yeah
[11:38] <gnuoy> oh /o\
[11:38] <jamespage> I think that's the case, but can't get an env up right now
[11:40] <jamespage> beisner, gnuoy: it would appear notifications are going astray somewhere on serverstack
[11:40] <beisner> jamespage, oh yeah ... also observable in not always getting an instance; juju sits at "allocating..."
[11:41] <beisner> meanwhile nova knows nothing of the situation
[11:41] <beisner> but, on the jobs ref'd in bugs, i've run, re-run, and re-confirmed that things went well for those runs, afaict.
[11:42] <jamespage> beisner, hmm
[11:44] <beisner> jamespage, gnuoy - mojo os-on-os deploy test combos all pass. bear in mind, that just fires up an instance on the overcloud, checks it, and tears down. http://10.245.162.77:8080/view/Dashboards/view/Mojo/job/mojo_runner/
[11:44] <beisner> so there's a \o/ !
[11:45] <beisner> jamespage, gnuoy - the bare metal equivalent of that ^ is also almost all green. re-running a T-K fail. http://10.245.162.77:8080/view/Dashboards/view/Mojo/job/mojo_runner_baremetal/
[12:03] <jamespage> gnuoy, do we have a bug open for the rmq upgrade problem?
[12:03] <jamespage> the password def gets missed during the migration
[12:03] <gnuoy> jamespage, nope, I'll create one now
[12:07] <beisner> jamespage, gnuoy: fyi just deployed T-I/next. vgs and lvs come back "no volume groups found." added to bug 1480504
[12:07] <mup> Bug #1480504: Volume group "cinder-volumes" not found <amulet> <openstack> <uosci> <cinder (Juju Charms Collection):New> <https://launchpad.net/bugs/1480504>
[12:08] <gnuoy> jamespage, Bug #1480893
[12:08] <mup> Bug #1480893: Upgrading from stable to devel charm breaks clients <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1480893>
[12:13] <jose> Odd_Bloke: hey, I'm getting some errors with the ubuntu-repository-cache charm, the start hook is failing
[12:14] <jose> let me run and do a pastebin of the output
[12:17] <jamespage> gnuoy, dosaboy: added some detail to that bug - I need to take an hour out - maybe dosaboy could look at a fix in the meantime?
[12:17] <jamespage> otherwise I'll pickup when I get back
[12:18] <gnuoy> jamespage, dosaboy, I can take a look
[12:19] <jamespage> gnuoy, ta - i think the migration code needs to switch to always resolving the rid for the cluster relation - or get passed that from higher up the stack (it's not currently)
[12:19] <gnuoy> kk
=== psivaa is now known as psivaa-lunch
[12:28] <jose> Odd_Bloke: lmk once you're back around please
[12:28] <Odd_Bloke> jose: o/
[12:29] <jose> Odd_Bloke: hey. I'm getting an error on the start hook of the ubuntu-repository-cache charm, says 'permission denied' for /srv/www/blahblah
[12:29] <jose> I'm having some issues with GCE right now so haven't been able to launch the instance
[12:29] <Odd_Bloke> Oh, hmph.
[12:29] <Odd_Bloke> Let me see if I can reproduce.
[12:29] <jose> cool
[12:30] <jose> I'll try to run again
[12:30] <Odd_Bloke> jose: Are you using any config, or just the defaults?
[12:30] <jose> Odd_Bloke: defaults here
[12:40] <Odd_Bloke> jose: Cool, waiting for my instances now. :)
[12:41] <jose> I wish I could say the same...
[12:42] <Odd_Bloke> :p
[12:49] <Odd_Bloke> jose: I'm seeing a failure in the start hook; let me dig in to it.
[12:49] <Odd_Bloke> Some of the charmhelpers bits changed how they do permissions, so it's probably an easy fix.
[12:49] <jose> cool, I thought that but wasn't sure
[12:50] <Odd_Bloke> jose: Do you have a recommendation for quickly testing new versions of charms? Is there something I can do with containers, or something?
[12:50] <jose> Odd_Bloke: oh, definitely! wall of text incoming
[12:51] <jose> so, ssh into the failing instance. then do sudo su. cd /var/lib/juju/agents/unit-ubuntu-repository-cache-0/charm/hooks/
[12:51] * Odd_Bloke braces for impact.
[12:51] <jose> edit start from there
[12:51] <jose> then save your changes and do a juju resolved --retry ubuntu-repository-cache/0
[12:51] <jose> and if it goes well it should go out of error state
[12:52] <jose> just copy the exact same changes you did on the unit to your local charm and commit + push
[12:54] <jose> DHX should be a good tool too, but I can't give much insight on how it works and its usage
[12:55] <coreycb> jamespage, hey I'm back, need input for something?
[12:58] <Odd_Bloke> Hmph, I'm sure we saw this problem before and I fixed it.
[12:59] <Odd_Bloke> I guess I did trash my old charm-helpers merge branch, which might have been where I fixed it.
[13:00] <jose> probably missed that one bit :)
[13:01] <jamespage> coreycb, yeah - could you check the deploy from source release notes pls?
[13:01] <jamespage> coreycb, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1507
[13:06] <jamespage> gnuoy, how far did you get?
[13:06] <gnuoy> jamespage, so...
[13:07] <gnuoy> I don't think which specify rid any higher
[13:07] <jamespage> gnuoy, ?
[13:07] <gnuoy> since leader_get is supposed to mimic leader-get
[13:07] <jamespage> gnuoy, well in the scope of peerstorage, its whatever we make it :-)
[13:07] <jamespage> as we have a wrapper function there
[13:08] <gnuoy> jamespage, as for line 98, peer_setting = _relation_get(attribute=attribute, unit=local_unit(), rid=valid_rid)
[13:08] <gnuoy> does fix it
[13:08] <jamespage> yah
[13:08] <gnuoy> jamespage, if you use relation_get you get an infinite loop which is fun
[13:08] <jamespage> gnuoy, I was thinking - http://paste.ubuntu.com/11993006/
[13:09] <jamespage> less the debug
[13:09] <jamespage> gnuoy, this has potential to impact of pxc and stuff right?
[13:09] <gnuoy> jamespage, yes the whole caboodle
[13:10] <jamespage> grrr
[13:10] <jamespage> gnuoy, in fact I'm surprised everything else is still working :-)
[13:10] <gnuoy> jamespage, +1 to your fix given the point you make about the scope of leader_get in peer storage
[13:10] <jamespage> gnuoy, ok working on that now
[13:13] <coreycb> jamespage, notes look good, I made a few minor tweaks.
[13:24] <jamespage> dosaboy, gnuoy: https://code.launchpad.net/~james-page/charm-helpers/lp-1480893/+merge/266712
[13:26] <jamespage> that should sort-out the out-of-cluster context migration of peer data to leader storage
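A rough sketch of the shape of the fix being discussed (the helper name and structure are illustrative, not the contents of the merge proposal): when the peer-to-leader migration runs in a hook outside the cluster relation context, relation_get() with no rid comes back empty, the stored password looks absent, and a fresh one gets generated; resolving the cluster relation id explicitly avoids that.

    from charmhelpers.core.hookenv import (
        local_unit,
        relation_get,
        relation_ids,
    )

    def peer_setting(attribute, peer_relation_name='cluster'):
        """Read a peer-stored attribute for this unit from the peer
        relation explicitly, so the lookup works in any hook context."""
        for rid in relation_ids(peer_relation_name):
            value = relation_get(attribute=attribute, unit=local_unit(),
                                 rid=rid)
            if value is not None:
                return value
        return None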
[13:28] <gnuoy> jamespage, I'm surprised lint isn't sad about rid being defined twice
[13:30] <gnuoy> jamespage, err ignore me
[13:31] * jamespage was already doing that :-0
[13:31] <jamespage> gnuoy, lol
[13:36] <dosaboy> jamespage: reviewed
[13:38] <jamespage> dosaboy, gnuoy jumped you and landed that
[13:38] <jamespage> dosaboy, I actually think leader_get should not be exposed outside of peerstorage
[13:38] <jamespage> its an internal function imho
[13:38] <jamespage> the api is peer_retrieve
[13:38] <jamespage> which deals with the complexity
[13:42] <dosaboy> jamespage: yup fair enough
[13:45] <jamespage> gnuoy, want me to deal with syncing that to rmq?
[13:50] <gnuoy> jamespage, well, we should sync it across the board
[13:50] <jamespage> gnuoy, +1
[13:50] <jamespage> gnuoy, got that automated yet?
[13:50] <gnuoy> ish
[13:52] <gnuoy> beisner, looks like it's time for another charmhelper sync across the charms. I'll do that now unless you have any objections?
[13:52] <beisner> gnuoy, +1 also ty
[13:56] <Odd_Bloke> jose: My units seem to get stuck in 'agent-state: installing'; any idea how I can work out what's happening?
[13:56] <jose> Odd_Bloke: juju ssh ubuntu-repository-cache/0; sudo tail -f /var/log/juju/unit-ubuntu-repository-cache-0.log (-n 50)
[13:57] <jose> that gives you the output of your scripts
[13:58] <Odd_Bloke> jose: I haven't even got the agent installed yet, so my scripts haven't started.
[13:58] <axino> Odd_Bloke: you'll have to go on the GCE console
[13:58] <axino> Odd_Bloke: and look at "events" (or something) there
[13:58] <jose> Odd_Bloke: oh, huh. if there's a machine error, then juju ssh 0; sudo tail /var/log/juju/all-machines.log
[13:59] <jose> axino: it's probably best to take a look at all-machines.log, last time when I went to the gce console machines simply weren't there and I couldn't find a detailed answer on what was going on :)
[14:00] <axino> jose: there was nothing in all-machines.log last time I had issues :( just events in GCE console (which are a bit hard to find, I must say)
[14:00] <Odd_Bloke> Oh, perhaps I misunderstand the status output.
[14:00] <jose> I'm still learning how to deal with GCE :)
[14:01] <Odd_Bloke> jose: OK, looks like I've fixed it.
[14:01] <Odd_Bloke> Let me push up a MP.
[14:01] <jose> woohoo! \o/
[14:02] <Odd_Bloke> jose: https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-perms/+merge/266724
[14:03] <jose> taking a look
[14:05] <Odd_Bloke> jose: So host.mkdir creates parents, so that line is unnecessary, and forces permissions to something that is broken.
[14:05] <Odd_Bloke> jose: So we can just lose that line.
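The gist of that fix, as a minimal sketch (paths and ownership here are illustrative, not the charm's actual layout): charmhelpers' host.mkdir already creates any missing parents, so a separate call that pre-creates the parent only pins its permissions, which is how a 'permission denied' under /srv/www can arise in the start hook.

    from charmhelpers.core import host

    # Unnecessary, and it forces permissions on the parent directory that
    # the service later trips over:
    # host.mkdir('/srv/www', perms=0o555)

    # Sufficient on its own - missing parents are created as needed:
    host.mkdir('/srv/www/mirror', owner='www-data', group='www-data',
               perms=0o755)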
[14:06] <jose> as long as it works we're good :P
[14:16] <Odd_Bloke> jose: It does. :)
[14:17] <jose> 'running start hook'
[14:17] <Odd_Bloke> Oh, you're _testing_ it?
[14:17] <Odd_Bloke> Pfft.
[14:17] <jose> I am :)
[14:17] <jose> need to
[14:18] <Odd_Bloke> :)
[14:18] <Odd_Bloke> As well you should.
[14:18] <sto> Is anyone working on a charm to install designate? I want to try it on my openstack deployment and I'll be happy to test a charm instead of installing it by hand (I have no experience writing charms right now)
[14:19] <jose> sto: I'm sorry, but I don't know what designate is. maybe you have a link to its website?
[14:19] <sto> jose: it is an openstack service https://github.com/openstack/designate
[14:20] <jose> oh
[14:20] <sto> And it is already packaged
[14:20] <jose> unfortunately, I don't see a designate charm on the store. sorry :(
[14:21] <jose> but maybe an openstack charmer can work on it? :)
[14:21] <jamespage> gnuoy, I'm going to switch to liberty milestone 2 updates - pull me back if you need hands
[14:21] <sto> Yes, I know that there is no charm on the store, that's why I was asking... ;)
[14:21] <jamespage> its not critical but would like to push it out soonish
[14:22] <gnuoy> ok, np
=== natefinch is now known as natefinch-afk
[14:22] <beisner> gnuoy, just lmk when the c-h sync is all pushed, and i'll run metal tests. probably with some sort of heavy metal playing.
[14:22] <gnuoy> beisner, crank up Slayer, c-h sync is all pushed
[14:23] <beisner> gnuoy, jamespage - wrt that cinder bug, it's with the default lvm-backed storage where i'm seeing breakage. works fine with ceph-backed storage. bug updated with that lil tidbit.
[14:23] <gnuoy> sto I heard people talking about creating a charm but I'm not sure it ever got past the hot air stage
[14:23] <beisner> gnuoy, awesome thanks
[14:24] <beisner> gnuoy, isn't that on our list-o-stuff to add more official support for in the os-charms?
[14:24] <gnuoy> I think Barbican and Designate were high on the list
[14:25] <beisner> yep that sounds right.
[14:29] <jose> Odd_Bloke: woohoo! it looks like it deployed cool!
[14:29] <jose> I'm gonna give it a quick test ride and merge
[14:32] <sto> gnuoy: ok, thanks, I guess I'll install it by hand on a container to see how it works
[14:34] <jose> Odd_Bloke: woot woot! works works works works!
[14:34] <gnuoy> jamespage, beisner Trusty Icehouse, stable -> next upgrade test ran through cleanly. thanks for the patch Mr P.
[14:35] <beisner> gnuoy, jamespage \o/
[14:35] <beisner> gnuoy, do you have a modified upgrade spec to deal with qg:ng?
[14:36] <gnuoy> beisner, yes, I'm running from lp:~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha
[14:36] <Odd_Bloke> jose: \o/
[14:36] <jose> Odd_Bloke: should be merged. thanks a bunch for the quick fix, really appreciated!
[14:37] <Odd_Bloke> jose: No worries, thanks for the quick merge. :)
[14:42] <beisner> gnuoy, these guys didn't get a c-h sync, is that by design?: n-g, pxc, n-ovs, hacluster, ceph-radosgw, ceph-osd
[14:42] <gnuoy> beisner, I'll check, they may not be using the module that changed (but I'd have thought pxc was tbh)
[14:53] <gnuoy> beisner, sorry about that, done now (no change for n-ovs)
[14:53] <gnuoy> oh, cause it did work the first time
=== psivaa-lunch is now known as psivaa
=== JoshStrobl is now known as JoshStrobl|AFK
[15:50] <jcastro> marcoceppi: oh hey I forgot to ask you if everything with rbasak/juju in distro is ok?
[15:50] <jcastro> anyone need anything from me?
[15:51] <marcoceppi> jcastro: I have to fix something in the packaing and upload it, I'm about to do a cut of charm-tools and such so I'll fix those then
[15:51] <jcastro> ack
[15:54] <jamespage> beisner, suspect that regex is causing the issue - reconfirming now
[15:54] <beisner> jamespage, ack ty
[16:12] <beisner> beh. look out, gnuoy, jamespage - i just got 11 ERROR instances on serverstack ("Connection to neutron failed: Maximum attempts reached")
[16:21] <jamespage> beisner, sniffs like rmq
[16:22] * beisner must eat, biab...
=== Guest11873 is now known as zz_Guest11873
[16:42] <apuimedo> lazyPower:
[16:42] <lazyPower> apuimedo: o/
[16:42] <apuimedo> lazyPower: how are you doing?
[16:42] <lazyPower> Pretty good :) Hows things on your side of the pond?
[16:43] <apuimedo> warm
[16:43] <apuimedo> :-)
[16:43] <apuimedo> lazyPower: I have a charm that at deploy time needs to know the public ip it will have
[16:44] <apuimedo> usually what I was doing was add a machine, and then knowing the ip change the deployment config file
[16:44] <lazyPower> apuimedo: unit-get public-address should get you situated with that though
[16:44] <apuimedo> but I was wondering if it were possible in the install script to learn the public ip
[16:44] <apuimedo> ok
[16:44] <apuimedo> that's what I thought
[16:45] <apuimedo> and it's the same the other machines in the deployment will see it with, right?
[16:45] <apuimedo> lazyPower: so unit_public_ip should do the trick
[16:46] <apuimedo> hookenv.unit_public_ip
[16:47] <lazyPower> yep
[16:47] <lazyPower> and looking at the source, that wraps unit-get public-address :)
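In charm code that looks roughly like this (a minimal sketch; the log message is illustrative):

    from charmhelpers.core import hookenv

    # Both helpers shell out to unit-get under the hood.
    public_address = hookenv.unit_public_ip()    # unit-get public-address
    private_address = hookenv.unit_private_ip()  # unit-get private-address

    hookenv.log('install hook sees public address %s' % public_address)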
=== JoshStrobl|AFK is now known as JoshStrobl
[16:59] <apuimedo> ;-)
[16:59] <apuimedo> thanks
[17:06] <lazyPower> np apuimedo :)
=== zz_Guest11873 is now known as CyberJacob
[17:43] <beisner> gnuoy, jamespage - the heat charm does have a functional usability issue, though not a deployment blocker, nor a blocker for using heat with custom templates. that is, the /etc/heat/templates/ dir is just awol. bug 1431013 looks to have always been this way, so prob not crit for 1507/8 rls.
[17:43] <mup> Bug #1431013: Resource type AWS::RDS::DBInstance errors <amulet> <canonical-bootstack> <openstack> <uosci> <heat (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1431013>
=== natefinch-afk is now known as natefinch
[17:44] <lazyPower> ejat-: Hey, how did you get along this weekend? I wound up being AFK for a good majority.
[17:47] <beisner> gnuoy, jamespage, coreycb ... aka ... ^ our "one" remaining tempest failure to eke out ;-) http://paste.ubuntu.com/11994632/
[17:49] <coreycb> beisner, would that fixup the rest of the failing smoke tests?
[17:49] <beisner> see paste ... we are down to that 1
[17:49] <beisner> after some merges and template tweaks today
[17:50] <beisner> coreycb, i'm installing heat from package in a fresh instance just to see if the templates dir is awol there (ie. without a charm involved).
[17:54] <beisner> coreycb, yeah so these files and this dir don't make it into /etc/heat/templates when installing on trusty. http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/trusty/heat/trusty/files/head:/etc/heat/templates/
[17:54] <coreycb> beisner, that's awesome, down to 1
[17:55] <coreycb> beisner, might be a packaging issue
[17:56] <beisner> coreycb, yeah, woot!
[17:56] <beisner> coreycb, and ok, bug updated, she's all yours ;-)
[17:57] <coreycb> beisner, thanks yeah I'll dig deeper later, need to get moving on stable kilo
[17:57] <beisner> coreycb, yep np. thanks!
[18:16] <beisner> gnuoy, 1.24.4 is in ppa:juju/proposed re: email, when you next exercise the ha wip spec(s), can you do that on 1.24.4?
[18:34] <ddellav> jamespage, your requested changes have been made and all tests updated: https://code.launchpad.net/~ddellav/charms/trusty/glance/upgrade-action/+merge/265592
[19:50] <jamespage> beisner, reverting that regex change resolves the problem
[19:50] <jamespage> with cinder
[19:51] <beisner> jamespage, ah ok. i can't find context on that original c-h commit @ http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/revision/409
[19:52] <jamespage> beisner, [daniel-thewatkins] Detect full disk mounts correctly in is_device_mounted
[19:52] <beisner> jamespage, yep, saw that, but was looking for a merge proposal or bug to tie it to.
[19:53] <beisner> (being that that fix breaks this charm)
[20:02] <beisner> jamespage, i suspect context is: http://bazaar.launchpad.net/~charmers/charms/trusty/ubuntu-repository-cache/trunk/view/head:/lib/ubuntu_repository_cache/storage.py#L131
[20:02] <beisner> s/is/was/
[20:03] <jamespage> beisner, I'm actually wondering whether that charm-helpers change has uncovered a bug
[20:04] <jamespage> beisner, huh - yeah it does
[20:04] <beisner> ooo oo a cascading bug
[20:04] <jamespage> beisner, /dev/vdb was getting missed on instances, so got added to the new devices list before
[20:04] <jamespage> no longer true
[20:04] * jamespage scratches his head for a fix
[20:08] <jamespage> beisner, the charm does not have configuration semantics that support re-using a disk that's already mounted
[20:09] <jamespage> beisner, the overwrite option specifically excludes disks already in use - it's a sort of failsafe
[20:09] <jamespage> beisner, I could do a ceph type thing for testing
[20:12] <beisner> jamespage, ok so vdb is mounted @ /mnt, and with that c-h fix, the is-it-mounted helper (is_device_mounted) actually works, whereas all along we've just been clobbering vdb? is that about right?
[20:12] <jamespage> beisner, yup
[20:13] <beisner> jamespage, ok i see it clearly now.
[20:19] <jamespage> beisner, ok - testing something now
[20:21] <jamespage> beisner, https://code.launchpad.net/~james-page/charms/trusty/cinder/umount-mnt/+merge/266803
[20:21] <jamespage> testing now
[20:22] <beisner> sweet. oh look you even updated the amulet test. i was just thinking: i'll need to update a config option there.
[20:23] <beisner> jamespage, if this approach is what we stick with, i'll update o-c-t bundles
[20:23] <jamespage> beisner, how else would I test my change? ;)
[20:24] <beisner> jamespage, well that's the shortest path for sure!
[20:24] <jamespage> beisner, longer term filesystem_mounted should go to charm-helpers
[20:24] <jamespage> but for tomorrow here is fine imho
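The shape of what is being discussed, as a hedged sketch (the config key name and helpers here are illustrative, not the merge proposal itself): check /proc/mounts via charmhelpers and only unmount the ephemeral mount point when the operator has explicitly opted in, since silently reusing an already-mounted disk is exactly what the overwrite safeguard is meant to prevent.

    from charmhelpers.core import hookenv, host

    def filesystem_mounted(mount_point):
        """True if something is currently mounted at mount_point."""
        return mount_point in [m[0] for m in host.mounts()]

    def maybe_release_ephemeral(mount_point='/mnt'):
        # Only unmount when explicitly asked to - e.g. a config option
        # along the lines of the ceph charm's ephemeral handling.
        if hookenv.config('ephemeral-unmount') and \
                filesystem_mounted(mount_point):
            host.umount(mount_point, persist=True)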
[20:28] <beisner> jamespage, ack
[20:39] <jamespage> beisner, passed its amulet test for me
[20:39] <jamespage> beisner, https://code.launchpad.net/~james-page/charms/trusty/cinder/umount-mnt/+merge/266803
[20:39] <jamespage> gnuoy, ^^ or any other charmer
[20:39] <jamespage> beisner, I've not written a unit test which makes me feel guilty
[20:39] <jamespage> but I need to sleep
[20:40] <marcoceppi> jamespage: idk, lgtm
[20:40] <beisner> jamespage, lol
[20:40] <jamespage> marcoceppi, ta
[20:40] <beisner> jamespage, yes, i believe this will do the trick. thanks a ton. i've updated and linked the bug.
[20:40] <marcoceppi> jamespage: maybe just default ephemeral-mount to /mnt ?
[20:41] <jamespage> marcoceppi, meh - I'd prefer to keep it aligned to ceph
[20:41] <marcoceppi> jamespage: and I really don't care enough either way
[20:41] <jamespage> marcoceppi, just in case someone did have /mnt mounted as something else :-)
[20:41] <jamespage> marcoceppi, and really did not want it unmounted
[20:41] * marcoceppi nods
[20:41] <jamespage> marcoceppi, this is really a testing hack
[20:41] <marcoceppi> jamespage: yeah, I see that in the amulet test you updated
[20:42] <jamespage> beisner, ok - going to land that now
[20:42] <beisner> jamespage, yep +1
[20:43] <jamespage> beisner, done - to bed with me!
[20:43] <jamespage> nn
[20:43] <beisner> jamespage, thanks again. and, Odd_Bloke thanks for fixing that bug in is_device_mounted.
[20:50] <Odd_Bloke> beisner: :)
[21:00] <moqq> how do i deal with an environment that seems completely stalled? when i try ‘juju status’ it just hangs indefinitely
[21:10] <marcoceppi> moqq: is the bootstrap node running?
[21:11] <marcoceppi> what provider are you using?
[21:12] <moqq> marcoceppi: yes, machine-0 service is up. manual provider
[21:12] <marcoceppi> moqq: can you ssh into the machine?
[21:12] <moqq> yep
[21:12] <marcoceppi> moqq: `initctl list | grep juju`
[21:13] <moqq> marcoceppi: http://pastebin.com/dUqwsTez
[21:14] <marcoceppi> moqq: sweet! VoltDB
[21:14] * marcoceppi gets undistracted
[21:14] <moqq> haha
[21:14] <marcoceppi> moqq: try `sudo restart jujud-machine-0`
[21:14] <marcoceppi> give it a few mins
[21:14] <marcoceppi> then juju status
[21:15] <marcoceppi> also, are you out of disk space? `df -h`?
[21:15] <moqq> no plenty of space. and restarting the service to no avail, have cycled it a good handful of times
[21:15] <marcoceppi> moqq: have you cycled the juju-db job as well?
[21:15] <marcoceppi> that's the next one
[21:15] <moqq> yeah
[21:15] <marcoceppi> moqq: time to dive into the logs
[21:16] <marcoceppi> what's the /var/log/juju/machine-0 saying?
[21:16] <moqq> marcoceppi: http://pastebin.com/KWDXACvD
[21:17] <marcoceppi> moqq: were you running juju upgrade-juju ?
[21:17] <moqq> yeah at one point i tried to and it failed
[21:17] <marcoceppi> moqq: from what version?
[21:18] <marcoceppi> moqq: this may be a bug that was fixed recently, and if so there's a way to recover still
[21:18] <moqq> 1.23.something -> 1.24.4
[21:18] <moqq> iirc
[21:19] <marcoceppi> moqq: what does `ls -lah /var/lib/juju/tools` look like?
[21:20] <moqq> marcoceppi: http://paste.ubuntu.com/11996021/
[21:20] <marcoceppi> moqq: this should help: https://github.com/juju/docs/issues/539
[21:20] <marcoceppi> moqq: you'll need to do that for all of the symlinks
[21:21] <marcoceppi> moqq: so, stop all the agents first
[21:21] <marcoceppi> then that
[21:21] <marcoceppi> then start them all up again, with juju-db and machine-0 being the first and second ones you bounce
[21:21] <moqq> ok thanks. on it
[21:22] <beisner> coreycb, around? if so can you land this puppy?: https://code.launchpad.net/~1chb1n/charms/trusty/hacluster/amulet-extend/+merge/266355
[21:26] <coreycb> beisner, sure, but is that branch frozen for release?
[21:29] <moqq> thanks marcoceppi that did the trick!
[21:30] <marcoceppi> moqq: awesome, glad to hear that. It was only a bug that existed in 1.23, so going forward you shouldn't have an issue with upgrades *related to this*
[21:30] <moqq> ok excellent
[21:30] <moqq> now, its gotten me to 1.24.3
[21:30] <moqq> but it seems to be refusing to go to 1.24.4
[21:31] <moqq> ubuntu@staging-control:/var/lib/juju/tools$ juju upgrade-juju --version=1.24.4 >>> ERROR no matching tools available
[21:31] <marcoceppi> moqq: 1.24 is a proposed release
[21:31] <marcoceppi> moqq: you need to set your tools stream to proposed instead of released
[21:31] <marcoceppi> moqq: I'd honestly just wait until it's released (in a few days)
[21:31] <moqq> i’m pretty sure i already did. juju has been constantly chewing up 100% of all of our cpus
[21:32] <moqq> so i was hoping the .4 upgrade would fix that
[21:32] <marcoceppi> ah
[21:32] <moqq> cuz if its not solved soon i have to rip out juju and switch to puppet or chef
[21:33] <marcoceppi> moqq: hum, juju using 100% shouldn't happen
[21:33] <marcoceppi> is there a bug already for this?
[21:33] <moqq> yeah https://bugs.launchpad.net/juju-core/+bug/1477281
[21:33] <mup> Bug #1477281: machine#0 jujud using ~100% cpu, slow to update units state <canonical-bootstack> <canonical-is> <performance> <juju-core:Triaged> <https://launchpad.net/bugs/1477281>
[21:34] <marcoceppi> moqq: looks like this was reported with 1.23, is it still chewing 100% cpu on 1.24.3?
[21:35] <moqq> it looks fine for the moment. but when i did this upgrade on the other env earlier it was fine for 20m then went back to spiking
[21:35] <moqq> going to watch it
[21:36] <marcoceppi> moqq: if it does spike up and start chewing 100% again, def ping me in here and update that bug saying it's still a problem, it's not targeted at a release so it's really not on the radar atm
[21:37] <marcoceppi> moqq: as to your other question about 1.24.4, what does `juju get-env agent-stream` say?
[21:38] <moqq> marcoceppi: ok, will do
[21:38] <moqq> apparently >>> ERROR key "agent-stream" not found in "staging" environment
[21:39] <marcoceppi> moqq: haha, well that's not good
[21:39] <marcoceppi> well, that's not bad either
[21:39] <marcoceppi> just, interesting
[21:40] <marcoceppi> moqq: you could try `juju set-env agent-stream=proposed`, then an other upgrade-juju (per https://lists.ubuntu.com/archives/juju/2015-August/005540.html)
[21:40] <marcoceppi> but if there is no value currently it may not like that
[21:40] <moqq> just gave a warning, but it set ok
[21:41] <marcoceppi> moqq: well, if you feel like being daring you can give it a go
[21:41] <marcoceppi> in the changelog I don't see any reference to CPU consumption
[22:13] <bdx> core, devs, charmers: Is there a method by which juju can be forced to not overwrite changes to config files on node reboot?
[22:14] <beisner> coreycb, nope we can land passing tests any time.