=== scuttle` is now known as scuttle|afk
=== scuttle|afk is now known as scuttle`
=== axw_ is now known as axw
[04:55] <blr> Would anyone happen to know if it is possible to debug-hooks both a subordinate and its parent together? Starting debugging for the second service complains that the tmux session already exists.
=== urulama__ is now known as urulama
=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
[09:00] <ejat> deployed horizon dashboard through juju ... may i know what is the default login and password ?
[12:09] <ejat> openstack-dashboard timeout when i tried to login
[12:09] <ejat> what should i do ?
[12:36] <ejat> marcoceppi: r u there?
=== skaro is now known as Guest17239
[13:39] <jrwren> can I cancel a pending action?
[13:41] <jrwren> use case is: i ran `juju action do db/0 dump` which takes a while. I accidentally ran it twice. I'd like to cancel the pending job.
[13:48] <rick_h_> jrwren: I know you can see the queue, thought there was a cancel api
[13:48] <jrwren> rick_h_: i can't find it.
[13:49] <rick_h_> jrwren: hmm, wonder if it made the api but not the cli
[13:50] <jrwren> rick_h_: must be. cli has only do, fetch, status
[13:50] <aisrael> jrwren: rick_h_: we have a meeting next week about action 2.0 features. I'll add a cancel command to the list
[13:51] <rick_h_> aisrael: ah ok, isn't there a method to see the queue and such?
[13:51] <rick_h_> I know it was part of the spec
[13:53] <aisrael> rick_h_: `juju action status` will show you everything, pending or not, but there's no queue management afaik
[13:53] <jrwren> rick_h_: `juju action status` shows all of them.
[13:54] <rick_h_> ah ok
[13:54] <rick_h_> cool
[14:00] <thedac> jamespage: question about ceph and cinder charms. Can we specify different pools like this example http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/
[14:01] <jamespage> thedac, I think so - let me check
[14:01] <thedac> thanks
[14:02] <jamespage> thedac, yes - the ceph pool is aligned to the service name - so
[14:02] <jamespage> juju deploy cinder-ceph cinder-ceph-sata
[14:02] <jamespage> juju deploy cinder-ceph cinder-ceph-ssd
[14:02] <jamespage> for example
[14:02] <thedac> ah, cool
[14:03] <jamespage> thedac, the trick is that the ceph charm does not support special placements (yet)
[14:03] <jamespage> thedac, so the backend pools should be pre-created in ceph by hand first - cholcombe has some stuff in flight to enhance pool management
[14:04] <firl> so I could set this up by hand manually within ceph, then have the cinder-ceph relation charm take care of the relations, and possibly use what cholcombe has to link it between cinder volumes and the ceph pools?
[14:06] <jamespage> firl, kinda
[14:06] <jamespage> if I deploy: juju deploy cinder-ceph cinder-ceph-sata
[14:06] <jamespage> the backend pool must == 'cinder-ceph-sata'
[14:07] <jamespage> so if you pre-create or re-create the pool directly in ceph with the required characteristics it will work ok
[14:07] <jamespage> juju add-relation cinder-ceph-sata cinder
[14:07] <jamespage> and juju add-relation cinder-ceph-sata ceph
[14:08] <jamespage> are required of course
[14:08] <firl> so I would have 2 cinder-ceph relations
[14:08] <firl> and 2 separate ceph charms for the environment
[14:08] <firl> ?
[14:12] <coreycb> firl, here's a bundle to reference for deploying from source - http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/source/default.yaml
[14:13] <firl> thanks!
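
A minimal sketch of the action workflow jrwren and aisrael discuss above, as it stood in juju 1.x: do, fetch and status were the only action subcommands, so a duplicate run could only be left to finish. The unit name and the action id placeholder below are illustrative.

    # queue an action (as jrwren did); the command prints an action id
    juju action do db/0 dump
    # list all actions, pending or completed, including accidental duplicates
    juju action status
    # fetch the output of a specific action by its id
    juju action fetch <action-id>
    # note: no cancel subcommand existed at the time of this discussion;
    # it was only being proposed for the "actions 2.0" work mentioned above

A rough sketch of the multi-backend setup jamespage describes, assuming two copies of the cinder-ceph charm named cinder-ceph-sata and cinder-ceph-ssd. The pool name must match the service name, and the placement group count passed to ceph osd pool create is only a placeholder, not a value from the discussion.

    # deploy the same charm twice under different service names
    juju deploy cinder-ceph cinder-ceph-sata
    juju deploy cinder-ceph cinder-ceph-ssd
    # pre-create the matching pools by hand on a ceph node, since the ceph
    # charm did not manage special placements at the time (PG count is a guess)
    ceph osd pool create cinder-ceph-sata 128
    ceph osd pool create cinder-ceph-ssd 128
    # wire each backend to cinder and to ceph
    juju add-relation cinder-ceph-sata cinder
    juju add-relation cinder-ceph-sata ceph
    juju add-relation cinder-ceph-ssd cinder
    juju add-relation cinder-ceph-ssd ceph
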
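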
[14:14] <bdx> firl: http://paste.ubuntu.com/12449099/
[14:17] <coreycb> firl, https://bugs.launchpad.net/charms/+source/nova-compute
[14:22] <firl> thedac: https://bugs.launchpad.net/charms/+bug/1497308
[14:22] <mup> Bug #1497308: local repository for all Openstack charms <Juju Charms Collection:New> <https://launchpad.net/bugs/1497308>
[14:26] <beisner> wolverineav, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1504
[14:33] <wolverineav> hey, neutron question - when I enable DVR, the DHCP and L3 agent are deployed on the compute node. I'd like to disable the L3 agent completely. Is there a way to do that in the neutron-api charm?
[14:33] <wolverineav> or, what would be the way to go about it?
=== ming is now known as Guest90652
[14:34] <coreycb> jamespage, gnuoy, any idea on this ^
[14:34] <jamespage> wolverineav, DVR enables metadata and l3-agent I think
[14:34] <jamespage> there is an extra toggle to enable dhcp as well
[14:34] <Guest90652> does juju-core 1.24.2 support CentOS7 on EC2?
[14:35] <jamespage> there is no way to do that in the charm right now, as it's assumed from the charm choices you're making that you want the ml2/ovs driver
[14:35] <jamespage> wolverineav, what's the use case?
[14:36] <wolverineav> jamespage, yes right. we're currently moving towards pulling the various agents into the big switch controller. the current release supports L3 and the next one will have DHCP and Metadata.
[14:37] <jamespage> wolverineav, ok - so in this case, you don't want to use the neutron-openvswitch charm - I'd suggest a neutron-bigswitch charm that dtrt for a big switch deployment
[14:37] <jamespage> as you really just want ovs right?
[14:37] <jamespage> not all of the neutron agent scaffolding around it
[14:38] <wolverineav> i'll be doing something like the ODL charm which deploys its own virtual switch. I would not be deploying the vanilla OVS
[14:39] <ejat> jamespage: openstack-dashboard charm, do i need to manually change the keystone host in local_settings.py ?
[14:39] <jamespage> ejat, no you just add a relation to keystone
[14:39] <jamespage> wolverineav, sounds like a neutron-bigswitch charm is the right approach then
[14:39] <ejat> already did the relation
[14:40] <jamespage> wolverineav, openvswitch-odl is the way forward from a frameworks perspective
[14:40] <ejat> should i change :
[14:40] <ejat> From:
[14:40] <ejat> OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
[14:40] <ejat> To
[14:40] <ejat> OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
[14:40] <jamespage> no
[14:40] <ejat> or leave it as it is
[14:40] <jamespage> you should not need to change anything
[14:40] <wolverineav> jamespage, so it would be a neutron-api-bigswitch kinda thing. ah, i see
[14:40] <jamespage> wolverineav, kinda
[14:40] <jamespage> the bit for nova-compute == like the openvswitch-odl charm
[14:41] <jamespage> the bit for neutron-api == like neutron-api-odl
[14:41] <wolverineav> got it
[14:41] <ejat> it can't communicate
[14:41] <ejat> on azure
[14:41] <ejat> from dashboard can't ping keystone
[14:41] <ejat> because it takes the public dns
[14:42] <ejat> how to restart/reboot one of the service machines
[14:44] <ejat> jamespage: http://paste.ubuntu.com/12449394/
[14:50] <jamespage> ejat, blimey openstack on top of azure?
[14:51] <ejat> yups ..
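
A minimal sketch of the relation-driven configuration jamespage describes for the dashboard, assuming the default charm service names; the grep is just one way to confirm what the relation rendered, and the unit number and config path (Ubuntu's packaged horizon location) are assumptions.

    # relate the dashboard to keystone instead of editing local_settings.py
    juju add-relation openstack-dashboard keystone
    # check which keystone host the relation actually wrote into the config
    juju ssh openstack-dashboard/0 \
        'sudo grep OPENSTACK_HOST /etc/openstack-dashboard/local_settings.py'
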
[14:51] <ejat> demo purpose
[14:57] <ejat> jamespage: http://picpaste.com/Screen_Shot_2015-09-18_at_10.56.08_PM-a0Td6kaK.png
[15:06] <Slugs_> I've followed the Ubuntu openstack single installer guide located here - http://openstack.astokes.org/guides/single-install - Every service starts up except for Glance - Simplestream Image Sync. This should not hinder me from logging in to horizon, but for some reason I can't authenticate with my username 'ubuntu' and the password 'openstack'. I have been able to stop the container, start the container, login to the container and
[15:06] <Slugs_> check juju logs, but I would like some more clarification on this to make sure I'm doing this correctly.
[15:16] <jingizu_> Hi all! When I try to juju remove a service (e.g. quantum-gateway, a fundamental part of openstack) and re-deploy it to a different server (e.g. --to lxc:3)... I notice that it does remove it from juju, but the actual services themselves (i.e. all the neutron python servers on the original host) are still running... it's as if it just removed it from the juju
[15:16] <jingizu_> database but did not actually stop the services themselves
[15:17] <jingizu_> Of note is that the service is running on a system deployed bare-metal that is also running other juju services, so juju couldn't just tear down the lxc or tell MAAS to kill the bare-metal machine altogether (which obviously would kill the services too)
[15:22] <marcoceppi> ejat: what's your question?
[15:23] <ejat> openstack-dashboard charm
[15:23] <ejat> add-relation putting the public dns for keystone host in dashboard
[15:25] <ejat> jamespage: said i should not change anything in local_settings.py
[15:26] <jamespage> ejat, sorry - I'm not that familiar with how public dns works in azure; you should not have to change anything in local_settings.py normally but ymmv on anything other than MAAS (or OpenStack itself)
[15:27] <ejat> i can't login to the dashboard
[15:27] <ejat> openstack.informology.my/horizon
[15:33] <ejat> login then timeout
[16:00] <firl> thedac: http://paste.ubuntu.com/12450237/
[16:10] <firl> thedac: http://paste.ubuntu.com/12450337/
[17:25] <amit213> jingizu_: on your question about removing a service, you'll also have to first do remove-relation on that service (which you're trying to remove) for all its peer services. Once the removal of relations is done, the remove-service should go smoothly. there is also a --force flag that can be used.
[17:35] <jingizu_> amit213: Thanks for the reply. At this point I have managed to remove the service(s) in question. Like I mentioned, the service is no longer listed in juju status. However, the underlying programs that correspond to the service are still running... Any ideas why it would not delete said programs, configs, etc. when removing the service?
=== scuttle` is now known as scuttle|afk
[20:38] <ennoble> with juju deployer or with the juju client add_machine call, can you specify a specific machine?
[21:02] <firl> ennoble: you can with constraints, and depending on the machine environment you can even use tags
=== natefinch is now known as natefinch-afk
[21:06] <ennoble> firl: I'm using maas. Is it possible to specify a specific machine my-server-1.foo? What about with the manual provider? ssh:root@my-server-that-maas-hates?
[21:07] <firl> ennoble: I don't know anything about the manual provider. here is the tags information https://maas.ubuntu.com/docs/tags.html
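
A sketch of the teardown order amit213 suggests, using quantum-gateway as in jingizu_'s example; the related services shown are illustrative, not taken from the discussion. Whether leftover daemons are actually stopped depends on the charm's stop hooks, which is why processes can survive the removal.

    # break the relations first, for every peer of the service being removed
    juju remove-relation quantum-gateway mysql
    juju remove-relation quantum-gateway rabbitmq-server
    # then remove the service itself
    juju remove-service quantum-gateway
    # the unit's stop hooks are what shut daemons down; if they don't, the
    # leftover processes have to be stopped by hand on the host, e.g.
    #   sudo service neutron-l3-agent stop
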
[21:08] <firl> depending on how many nodes and what not, I sometimes just "acquire" the nodes in MaaS so that I don't have things deployed to those containers
[21:08] <firl> So with juju deployer you can use the tags constraints, as with add-machine
[21:08] <firl> for service units you can just do a --to
[21:09] <ennoble> firl: so you acquire the nodes in maas? add tags to them there, and then deploy to them with juju deployer?
[21:09] <firl> "acquire" nodes in maas just means that juju deployer can't use them to pull from (it's a hack I use)
[21:09] <firl> but you can just add tags via the maas cli (in the maas gui in 1.8) to the physical machines and then use the deployer
[21:10] <firl> and everyone from ubuntu is probably at a party or traveling because they just finished up a summit in DC
[21:27] <nodtkn> ennoble: you can add a machine to any existing environment with juju add-machine ssh:ubuntu@<hostname>
=== zz_CyberJacob is now known as CyberJacob
[22:04] <ennoble> nodtkn: thanks, I can do that, I'm wondering after I do that can I make juju deployer use it?
[22:04] <mwenning> hi, looking for a quick answer - I moved a bundle from one system to another and ran juju-deployer --config=lis-test-bundle.yaml -e maas
[22:05] <mwenning> It returned with something about must specify deployment, what did I forget?
[22:06] <mwenning> 'Deployment name must be specified'
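
A sketch covering the two threads above, with placeholder names throughout: juju-deployer takes the deployment name (the top-level key inside the bundle YAML) as a positional argument, and MAAS tags can be targeted from juju through constraints. The deployment name, MAAS profile, tag, system id, and charm below are all assumptions for illustration.

    # juju-deployer wants the deployment name, i.e. the top-level key in the
    # bundle yaml, when it cannot work one out on its own
    juju-deployer --config=lis-test-bundle.yaml -e maas lis-test
    # tag a physical machine via the maas cli (profile/system-id are placeholders)
    maas my-profile tags new name=storage
    maas my-profile tag update-nodes storage add=node-abc123
    # then pin a service to tagged hardware through constraints
    juju deploy --constraints "tags=storage" ceph
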