UbuntuIRC / 2015/11/14 / #juju.txt
[00:22] <arosales> any ~charmers around?
[00:22] <arosales> I think most folks have started their weekend
[00:22] <marcoceppi> o/
[00:23] <marcoceppi> blahdeblah: it's failing lint
[00:24] <marcoceppi> DEBUG:runner:call ['/usr/bin/make', '-s', 'lint'] (cwd: /tmp/bundletester-FGMSmT/ntp)
[00:24] <marcoceppi> DEBUG:runner:hooks/ntp_hooks.py:77:80: E501 line too long (97 > 79 characters)
[00:24] <marcoceppi> DEBUG:runner:hooks/ntp_hooks.py:118:80: E501 line too long (90 > 79 characters)
[00:24] <marcoceppi> DEBUG:runner:make: *** [lint] Error 1
[00:24] <marcoceppi> DEBUG:runner:Exit Code: 2
[00:24] <blahdeblah> marcoceppi: thanks - those run *after* the amulet tests?
[00:25] <marcoceppi> blahdeblah: first
[00:25] <marcoceppi> http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-3531
[00:25] <marcoceppi> blahdeblah: that's a better breakdown of that output
[00:26] <blahdeblah> Right - that is much better; I'll get an update to that MP done over the weekend.
[00:28] <arosales> marcoceppi, wow still around :-)
[00:29] <arosales> marcoceppi: seems I can't find the MP for http://review.juju.solutions/review/2342
[00:29] <marcoceppi> arosales: it was deleted
[00:29] <marcoceppi> arosales: I'll remove from queue
[00:30] <arosales> blahdeblah: but looks like the ntp tests passed: DEBUG:runner:The ntp deploy test completed successfully.
[00:30] <arosales> marcoceppi: thanks
[00:30] * arosales will move onto the next one
[00:30] <marcoceppi> arosales: removed ;)
[00:31] <blahdeblah> arosales: Yeah - those tests aren't terribly sophisticated
[00:31] <arosales> well at least there are tests
[00:31] <arosales> :-)
[00:32] <marcoceppi> good news is, the tests pass, bad news is pep8 hates you ;)
[00:38] <blahdeblah> There's a way to tell those tests to override on a given line, isn't there?
[00:38] * blahdeblah asks Google
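For reference, the usual answer to blahdeblah's question is that flake8 (and recent releases of the pep8 checker) honor a trailing "# noqa" comment as a per-line override; the snippet below is a hypothetical illustration, not the actual ntp_hooks.py code.

    # Preferred fix for E501: wrap the statement so each line stays under 79
    # characters.
    command = ('dpkg-reconfigure '
               '--frontend noninteractive ntp')

    # Escape hatch: a trailing "# noqa" tells flake8 (and recent pep8) to skip
    # style checks on this line only.
    command = 'dpkg-reconfigure --frontend noninteractive ntp; service ntp restart'  # noqa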
[00:41] <arosales> marcoceppi does charm proof check for pep8?
[00:41] <marcoceppi> arosales: it checks the charm if there's a "lint" target
[00:41] <marcoceppi> the charm author has a make lint target so we run it as part of bundle tester
[00:41] <marcoceppi> so it's basically, bundletester will do the following:
[00:42] <marcoceppi> - charm proof
[00:42] <marcoceppi> - make lint (if available)
[00:42] <marcoceppi> - make test (if available - unit tests)
[00:42] <marcoceppi> - run the charm integration tests
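A rough sketch of that order, assuming bundletester simply shells out to charm proof and to the charm's Makefile targets; the helper below is illustrative only and not bundletester's real internals.

    import os
    import subprocess

    def run_static_checks(charm_dir):
        """Approximate the check order described above."""
        subprocess.check_call(['charm', 'proof', charm_dir])
        makefile = os.path.join(charm_dir, 'Makefile')
        if os.path.exists(makefile):
            with open(makefile) as f:
                targets = f.read()
            if 'lint:' in targets:
                subprocess.check_call(['make', '-s', 'lint'], cwd=charm_dir)
            if 'test:' in targets:
                subprocess.check_call(['make', '-s', 'test'], cwd=charm_dir)
        # the amulet integration tests under tests/ run after these checks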
=== med_ is now known as Guest17963
[00:43] <arosales> marcoceppi: ok, thanks
[00:55] <cory_fu> marcoceppi: Have you given any thought to making charm proof wrt. layers?
[00:56] <cory_fu> Charm layers tend to fare ok, but not so much base or interface layers
[00:56] <marcoceppi> cory_fu: I really want to make charm create for layers and charm add
[00:56] <marcoceppi> cory_fu: like charm create layer, charm add layer:nginx. I keep messing up the damn includes syntax like a dope
[00:57] <cory_fu> Agreed
[00:57] <marcoceppi> cory_fu: it's not a bad idea, it's not on the road map for this iteration but could make it on there before EOY
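For anyone else who trips over it, the includes list marcoceppi mentions lives in the layer's layer.yaml; a minimal sketch of what it contains, assuming PyYAML is available and with the layer and interface names chosen only as examples.

    # Print an example layer.yaml body showing the includes syntax; entries
    # take the form "layer:<name>" or "interface:<name>".
    import yaml

    layer_yaml = {
        'includes': ['layer:basic', 'layer:nginx', 'interface:http'],
    }
    print(yaml.safe_dump(layer_yaml, default_flow_style=False))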
[00:58] * marcoceppi packs up computer for the weekend
[00:58] <cory_fu> T'was just an errant thought
[01:01] <arosales> marcoceppi: For Monday, note charm CI is marking the charm as green even though LXC fails (AWS passes) [ref = http://review.juju.solutions/review/2350]
[01:02] <marcoceppi> arosales: the logic for that might not necessarily be bad
[01:03] <marcoceppi> do we want to weight failures higher than passes?
[01:03] <marcoceppi> esp. given the flakiness of some of the substrates
[01:03] <marcoceppi> lxc failed because of a provider problem (I restarted the tests)
[01:03] <arosales> one school of thought was that it had to pass on local and public cloud
[01:04] <marcoceppi> arosales: yes, but a failure doesn't always mean it's a charm problem
[01:04] <arosales> in this case the failure is due to timeout, most likely due to infrastructure
[01:04] <arosales> agreed
[01:04] <arosales> but charm CI doesn't tell us why it failed
[01:04] <arosales> just that it failed
[01:04] <marcoceppi> it does tell us
[01:04] <arosales> well, it doesn't surface whether it was an infrastructure or a charm failure
[01:04] <marcoceppi> DEBUG:runner:Deployment timed out (900s)
[01:05] <arosales> sorry, I didn't state that correctly
[01:05] <marcoceppi> arosales: the output we link people to is kind of crap
[01:05] <marcoceppi> it's hard to find that
[01:05] <marcoceppi> arosales: I agree we should work to distinguish infrastructure failure vs testing failure
[01:05] <marcoceppi> but we don't have that atm
[01:05] <arosales> but to your point, is it a charm failure or an infrastructure failure
[01:05] <arosales> but regardless
[01:05] <marcoceppi> agent-state-info: lxc container cloning failed
[01:05] <marcoceppi> it was infrastructure
[01:05] <arosales> the question is when do we mark a Charm CI test as a green box, i.e. passing
[01:05] <marcoceppi> LXC was broken for about 20 test runs because of some weird lingering issue
[01:06] * arosales saw that in a couple of test runs
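A crude way to surface that distinction would be to scan the runner output for known substrate errors such as the two messages quoted above; the list and helper below are only a sketch of the idea, not anything the review queue does today.

    # Messages seen in this log that point at substrate problems rather than
    # charm bugs; both the list and the function are hypothetical.
    INFRA_ERRORS = (
        'Deployment timed out',
        'lxc container cloning failed',
    )

    def classify_failure(runner_output):
        """Return 'infrastructure' or 'charm' for a failed test run."""
        if any(err in runner_output for err in INFRA_ERRORS):
            return 'infrastructure'
        return 'charm'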
[01:06] <marcoceppi> arosales: right, and the icon says "some tests have passed"; it's never definitive. I think we favor passing over failing given how often we have substrate issues
[01:06] <arosales> re my question of when to mark a charm CI run as passing: I thought it had to pass on local and a cloud
[01:06] <marcoceppi> arosales: we can reverse that logic, without problem, but it needs some discussion
[01:06] <arosales> but it seems currently it marks it as passing if it passes on just 1 cloud
[01:07] <marcoceppi> arosales: at the moment yes, I can see how the logic is confusing there
[01:07] <arosales> I think passing on 1 cloud is fair for green
[01:07] <arosales> but just wanted to confirm my understanding
[01:07] <marcoceppi> as soon as it gets one test result back we say the status, where passing > failing
[01:07] <arosales> oh
[01:07] <marcoceppi> so, it'll say "some tests are passing" for any result that comes back that's passing
[01:07] <arosales> so if it failed on 2 clouds, but passed on 1, it would be red?
[01:07] <marcoceppi> not sure
[01:08] <marcoceppi> I'm doing a terrible job of explaining this
[01:08] <arosales> sorry, I was taking you literally on passing > failing
[01:08] <arosales> I think I follow you though
[01:08] <marcoceppi> I'm saying pass is weighted greater than failure if there's a mixed result
[01:08] <marcoceppi> because of infra flakiness
[01:08] <marcoceppi> but we can easily reverse that logic where fail is if any one test has failed
[01:08] <marcoceppi> I've got to catch a plane so I need to EOD and pack, but we can chat more on Monday
[01:09] <marcoceppi> the new review queue will be a bit better at explaining this
[01:09] <marcoceppi> by just showing the numerical result
[01:09] <marcoceppi> X pass / Y fail
[01:09] <marcoceppi> explicit :)
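In other words, the current rollup weights any pass above failures because the substrates are flaky, while the new queue would simply publish the counts; a minimal sketch of both behaviours, with the wording and function names being illustrative only.

    def overall_status(results):
        """Current-style rollup: any pass wins, since substrates are flaky."""
        if 'pass' in results:
            return 'some tests have passed'   # rendered as a green box
        if results:
            return 'tests failed'             # rendered as a red box
        return 'no results yet'

    def numeric_status(results):
        """New-queue style: report raw counts, e.g. '1 pass / 2 fail'."""
        passes = results.count('pass')
        fails = results.count('fail')
        return '%d pass / %d fail' % (passes, fails)

    print(overall_status(['fail', 'pass', 'fail']))  # some tests have passed
    print(numeric_status(['fail', 'pass', 'fail']))  # 1 pass / 2 fail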
[01:13] <arosales> I like the weight on passing
[01:14] <arosales> later marcoceppi, travel safely
[01:20] <blahdeblah> marcoceppi: Pushed fix to that MP; does it retry testing automatically?
=== StoneTable is now known as aisrael
=== Tristit1a is now known as Tristitia
=== CyberJacob is now known as Guest72473
[06:32] <aisrael> Anyone had problems with juju under wily not starting?
=== scuttle` is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[21:16] <aatchison> i