[14:08] <g3naro> how do i scp files to a juju box?
[14:10] <g3naro> juju scp file 1:
[14:10] <g3naro> or something like this?
[14:12] <plars> g3naro: juju scp local_file unit_name/num:/remote/path
[14:12] <plars> g3naro: ex: juju scp myfile.tgz myservice/0:/tmp
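A sketch of the general form plars describes (the service, file names, and log path here are hypothetical; copying works in both directions):

    # copy a local file to /tmp on unit 0 of the myservice service
    juju scp myfile.tgz myservice/0:/tmp
    # the reverse also works: copy a remote file to the current local directory
    juju scp myservice/0:/var/log/juju/unit-myservice-0.log .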
[14:14] <jose> jcastro: ping
[14:14] <jcastro> yo
[14:18] <jose> jcastro: you pinged me a couple days ago - haven't been on IRC for around a week
[14:18] <jose> you needed something?
=== redelmann is now known as popi_
=== popi_ is now known as redelmann
=== redelmann is now known as s0plete
=== s0plete is now known as redelmann
[14:38] <beisner> gnuoy, thedac - so afaict, that cluster fix resolves the cluster races I was seeing (with LE) re: bug 1486177
[14:38] <mup> Bug #1486177: 3-node native rabbitmq cluster race <amulet> <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):Confirmed for thedac> <https://launchpad.net/bugs/1486177>
[14:40] <thedac> beisner: great. I will be working on a fix for pre leadership election versions today
[14:40] <gnuoy> thedac, beisner, tip top, thanks
[15:07] <beisner> coreycb, can you review/land this?  https://code.launchpad.net/~1chb1n/charms/trusty/swift-storage/amulet-update-1508/+merge/268788
[15:08] <beisner> coreycb, heads up too - swift-proxy, openstack-dashboard shortly behind that.
[15:18] <coreycb> beisner, sure.  I need to get liberty stuff done but then I'll look.
[15:19] <redelmann> anyone know about juju-gui?
[15:19] <redelmann> I'm trying to debug an issue
[15:19] <redelmann> on ec2 and maas, juju-gui is logging: {"RequestId":5,"Error":"unit not found","ErrorCode":"not found","Response":{}}
[15:20] <redelmann> juju debug-log: error stopping *state.Multiwatcher resource: unit not found
=== scuttle|afk is now known as scuttlemonkey
=== sarnold_ is now known as sarnold
=== wendar_ is now known as wendar
=== urulama is now known as urulama__
[17:06] <mhall119> help, trying to re-connect to my old canonistack environment after a long time of ignoring it; now juju gives me: WARNING unknown config field "tools-url"
[17:07] <mhall119> and doesn't do anything
[17:10] <g3naro> maybe try removing that option from your config?
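If the stale key is the culprit, a minimal sketch of g3naro's suggestion, assuming a juju 1.x setup with the config in ~/.juju/environments.yaml (that path is an assumption about mhall119's setup; back the file up first):

    # keep a backup, then drop the obsolete tools-url line
    cp ~/.juju/environments.yaml ~/.juju/environments.yaml.bak
    sed -i '/tools-url:/d' ~/.juju/environments.yaml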
[17:40] <thedac> beisner: if you have time can you independently test juju < 1.24 against lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes and also make sure it did not regress for >= 1.24. I'll be running similar tests as well.
[17:59] <bbaqar> Which branch of charmhelpers should I propose my changes in if I want them in each of the openstack charms?
=== scuttlemonkey is now known as scuttle|afk
[18:31] <beisner> thedac, thank you. yes, I'll cycle both.
[19:18] <mattrae> hi, I'd like to use the openstack provider.. is there an option to specify the object store endpoint?
[19:18] <mattrae> i can't seem to find it
[19:32] <jcastro> Juju office hours in 30 minutes!
[19:33] <rick_h_> jcastro: is there a topic or general Q/A?
[19:34] <jcastro> general office hours
[19:34] <jcastro> so like if someone shows up with an agenda that becomes the agenda
[19:38] <jcastro> rick_h_: we haven't had a UI guy in a while if you want to fill us all in
[19:38] <rick_h_> jcastro: ok, debating showing up but I don't have an agenda. Just to cheer or such :)
[19:38] <jcastro> well, jrwren shows up but he never knows what he's working on
[19:38] <rick_h_> :P
[19:38] <rick_h_> jcastro: k, linky me happy to jump in
[19:39] <jcastro> rick_h_: I'll fire up the hangout in about 15
[19:39] <rick_h_> alexisb: what were we talking about the other day about getting notice about?
[19:39] <jcastro> also if anyone from juju-core wants to hop in that'd be awesome
[19:39] <jcastro> wwitzel3: ^^^
[19:39] <jcastro> beisner: if you've got time for some openstack charm updates since you guys just had a release ...
[19:41] <wwitzel3> jcastro: sure
[19:45] <jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYd2-532QvR_YgYczuO1Np1AHT7LT9PBI5Hw-YeiJNflAe0_bQ
[19:45] <jcastro> rick_h_: wwitzel3: cory_fu ^^^^
[19:46] <jrwren> jcastro: I can't talk about what I'm working on :p
[19:46] <cory_fu> kwmonroe: ^^
[20:02] <rick_h_> linky: https://jujucharms.com/docs/devel/charms-bundles
[20:11] <rick_h_> jcastro: linky: https://github.com/juju/charmstore/blob/v5-unstable/docs/bundles.md
[20:14] <rick_h_> jcastro: https://jujucharms.com/docs/devel/wip-systems
[20:14] <rick_h_> jcastro: https://jujucharms.com/docs/devel/wip-users
[20:18] <kwmonroe> hey rick_h_, is "bundle" the right source branch name for bundles?  or "trunk", or will either work?
[20:19] <rick_h_> kwmonroe: it's bundle I think.
[20:19] <rick_h_> kwmonroe: trunk is for charms
[20:19] <kwmonroe> cool
[20:19] <kwmonroe> ack
[20:19] <rick_h_> kwmonroe: I think the diff was done as part of 'telling what's what' but it's history and not sure tbh
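For reference, a sketch of the Launchpad branch-naming convention rick_h_ is describing (the user, charm, and bundle names are hypothetical):

    # charms live under a series and ingest from a branch named trunk
    bzr push lp:~myuser/charms/trusty/mycharm/trunk
    # bundles live under the bundles namespace and ingest from a branch named bundle
    bzr push lp:~myuser/charms/bundles/mybundle/bundle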
[20:24] <wwitzel3> workload devel branch: https://github.com/juju/juju/tree/feature-proc-mgmt
[20:24] <wwitzel3> jcastro: ^
[20:25] <kwmonroe> realtime syslog analytics bundle: https://jujucharms.com/u/bigdata-dev/realtime-syslog-analytics
[20:41] <cory_fu> http://interfaces.juju.solutions/
[20:42] <rick_h_> jcastro: https://jujucharms.com/q/db-admin
[20:43] <rick_h_> jcastro: https://github.com/juju/charmstore/blob/v5-unstable/docs/API.md#search
[20:48] <wwitzel3> jcastro: https://insights.ubuntu.com/event/juju-charmer-summit-2015/
[20:49] <Mortin> cool walkthrough, thanks for the stream :)
[20:59] <mhall119> jcastro: what does "agent-state: down" mean? Does it mean the instance is down, or just something with juju?
[20:59] <jcastro> it means the juju agent itself is down
[21:00] <mhall119> the controlling node?
[21:00] <jcastro> is this on a new deployment?
[21:00] <jcastro> no, the agent on that node
[21:00] <mhall119> no, old canonistack one that I haven't touched in months
[21:00] <marcoceppi> mhall119: it means that juju can't speak to the agent that machine is running
[21:00] <marcoceppi> either the agent crashed or the machine is no longer reachable on the network (taken offline, networking changed, etc)
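A hedged way to act on that: check which agents are down, and if the machine is still reachable, restart its agent (the machine number is hypothetical, and the jujud-machine-N service name assumes a juju 1.x machine agent managed by upstart):

    # see per-machine and per-unit agent state
    juju status
    # if machine 1 is still reachable, restart its agent
    juju ssh 1 'sudo service jujud-machine-1 restart'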
[21:01] <mhall119> ok, can I juju destroy-environment when it's like this? or might that leave orphaned instances
[21:01]  * mhall119 thinks canonistack might have moved recently
=== natefinch is now known as natefinch-afk
[21:07] <hazmat> mhall119: if you remove-machine --force it should do the trick; if it's the bootstrap node, yeah, destroy-environment --force should do the trick (sans orphans)
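Spelled out, a sketch of the commands hazmat means (the machine number and environment name are hypothetical):

    # force-remove a single dead machine
    juju remove-machine 1 --force
    # if the bootstrap node itself is gone, force-destroy the whole environment
    juju destroy-environment canonistack --force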
[21:26] <hazmat> marcoceppi: if you're around on Wednesday, I'm doing an Ansible talk at the modev meetup .. it's right off the Silver Line at the McLean stop
[21:26] <marcoceppi> hazmat: sounds sweet!
[21:28] <marcoceppi> hazmat: I just RSVP'd thanks for the heads up
[21:33] <jcastro> https://insights.ubuntu.com/2015/08/24/a-midsummer-nights-juju-office-hours/
[21:40] <hazmat> rick_h_: just read through the new bundle thingy in jorge's link above; per the description it doesn't support containers as machines
[22:21] <rick_h_> hazmat: nested LXCs were fixed in a PR, I believe. It hasn't landed yet. Waiting on review?
[22:21] <rick_h_> hazmat: I know we had to fix something with that for the OS bundle case and we're running a deployer fork atm for that to work.
[22:22] <rick_h_> hazmat: if I'm misunderstanding let me know/have an example and we'll get it fixed up.
[22:30] <arosales> marcoceppi, jcastro: thanks for hosting the most recent office hours and sending out highlights with minute markers