UbuntuIRC / 2015/11/17 / #snappy.txt
=== fginther` is now known as fginther
=== vicamo_ is now known as vicamo
=== bpierre_ is now known as bpierre
[07:50] <dholbach> good morning
[08:03] <mvo_> hey dholbach, gooooood morning
[08:03] <dholbach> hey hey :)
[08:14] <fgimenez> good morning
[08:14] <zyga> hey :)
[08:15] <mvo_> hey fgimenez, good morning
[08:15] <mvo_> and hey zyga
[08:15] <mvo_> fgimenez: any luck with my integration test :) ?
[08:16] <fgimenez> hi mvo_ :) the job didn't get triggered, i'll try to dig into it right now
[08:18] <mvo_> fgimenez: ok, let me know if I can help in any way, happy to re-trigger it
[08:20] <fgimenez> mvo_, great thanks, i'll reconfigure it to listen to the repo on which i've been doing the development, where things work well, and try to figure out what the differences are; i've just checked the logs from the last payloads received from github and all seems to be ok
[08:32] <fgimenez> mvo_, the jobs are properly triggered from https://github.com/fgimenez/snappy-github-plugin, both on pull request and comments on them, see http://162.213.34.171:8080/job/github-snappy-integration-tests-cloud/2/console
[08:32] <fgimenez> mvo_, the first error in the log is because there's no bot user configured (the status in the PR won't be updated), but the job is triggered properly
[08:33] <fgimenez> mvo_, it's the same jenkins instance that failed with ubuntu-core/snappy, just modified the job to point to fgimenez/snappy-github-plugin, maybe some config is different on the github side
[08:43] <fgimenez> mvo_, this is what the webhook config looks like in the test repo http://postimg.org/image/486b11ewj/
[08:44] <mvo_> fgimenez: aha, mine was set to json instead of urlencoded
[08:46] <mvo> fgimenez: I changed to urlencoded and triggered the event again, anything interesting in the logs?
[08:47] <fgimenez> mvo_, mmm nope still http://paste.ubuntu.com/13310045/
[08:49] <fgimenez> mvo, if you are redelivering, maybe the payload is the same as before; i think urlencoded is the correct format, we can wait for a new PR
[08:52] <fgimenez> mvo, btw, the jenkins instance project is at https://github.com/ubuntu-core/snappy-jenkins, the readme has info about how it is configured
[08:53] <mvo> fgimenez: ok, I pushed a new branch
[08:55] <mvo> fgimenez: anything in the logs or same message?
[08:55] <fgimenez> mvo, still the same... what are the headers of the last event?
[08:56] <fgimenez> mvo, when it works they look like http://paste.ubuntu.com/13310136/
[08:56] <mvo> fgimenez: it's funny, the header says content-type: application/x-www-form-urlencoded
[08:56] <mvo> fgimenez: but the payload looks like it's still json for some reason
[08:56] <mvo> fgimenez:
[08:56] <mvo> 2015-11-17 09:53:26
[08:56] <mvo> looks valid
[09:01] <mvo> fgimenez: hm, recreated it, still no luck. I get a 200 response though
[09:01] <fgimenez> mvo, yes, with the urlencoded content-type the payload can be any string, json included; it's as if you pasted that json into a textarea and submitted the form, and if the server is aware of it then it can decode it
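For reference, the two delivery formats being compared can be reproduced with curl; the /ghprbhook/ endpoint is the usual one for the ghprb plugin, the server address is the one from the log, and the payload is trimmed to a stub:

    # application/json delivery: the raw JSON is the request body
    curl -X POST http://162.213.34.171:8080/ghprbhook/ \
         -H 'Content-Type: application/json' \
         -H 'X-GitHub-Event: pull_request' \
         -d '{"action": "opened"}'

    # application/x-www-form-urlencoded delivery: the same JSON arrives
    # wrapped in a form field named "payload", which is what the plugin decodes
    curl -X POST http://162.213.34.171:8080/ghprbhook/ \
         -H 'Content-Type: application/x-www-form-urlencoded' \
         -H 'X-GitHub-Event: pull_request' \
         --data-urlencode 'payload={"action": "opened"}'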
[09:02] <fgimenez> mvo, ok, i'll try to reproduce it with the test repo, first i'll get the full logs from jenkins to see what's really going on
[09:11] <fgimenez> mvo, could you please paste the header and payload of the latest event? i've seen here https://github.com/janinko/ghprb/blob/master/src/main/java/org/jenkinsci/plugins/ghprb/GhprbRootAction.java#L137 that the message from jenkins comes from an unrecognized event
[09:13] <mvo> fgimenez: I sent you a /msg with the details
[09:22] <ara> zyga, I like very much how you are explaining every step of the way to implement capabilities in the mailing list
[09:22] <ara> *kuds*
[09:22] <ara> *kudos*, even
[09:22] <zyga> ara: hey
[09:22] <zyga> ara: :D
[09:22] <zyga> ara: thank you, :-)
[09:24] <mvo> fgimenez: thanks so much for your help, now that I configured it right it works like a charm
[09:25] <fgimenez> mvo, np :) thank you
[10:08] <JamesTait> Good morning all; happy Tuesday, and happy Home-Made Bread Day! 😃
[10:13] <mvo> fgimenez: how long do the tests usually run?
[10:14] <mvo> fgimenez: the github-snappy-integration-tests-cloud jobs I mean
[10:15] <fgimenez> mvo, yes, it depends on the cloud load and the network response for updates, a usual run can take about 25min, sometimes longer (up to 1h) or shorter (16 min or so)
[10:16] <mvo> fgimenez: thanks
[10:16] <soffokl> Hey, I'm trying to disable snappy-autopilot.timer, but after a reboot it's active again. Is there any way to disable it permanently?
[10:16] <soffokl> (amd64)ubuntu@localhost:~$ sudo systemctl disable snappy-autopilot.timer
[10:16] <soffokl> Removed symlink /etc/systemd/system/multi-user.target.wants/snappy-autopilot.timer.
[10:16] <soffokl> (amd64)ubuntu@localhost:~$ sudo reboot
[10:16] <soffokl> $ ssh 172.16.194.210
[10:16] <soffokl> (amd64)ubuntu@localhost:~$ sudo systemctl list-timers snappy-autopilot.timer
[10:16] <soffokl> NEXT LEFT LAST PASSED UNIT ACTIVATES
[10:16] <soffokl> Tue 2015-11-17 11:00:00 UTC 50min left n/a n/a snappy-autopilot.timer snappy-autopilot.service
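What the exchange above doesn't spell out: systemctl disable only removes the enablement symlink, and something on the image evidently re-enables the timer on boot. The standard systemd way to make the off state stick is masking; a minimal sketch, assuming /etc is writable on the image:

    sudo systemctl stop snappy-autopilot.timer
    sudo systemctl mask snappy-autopilot.timer   # links the unit to /dev/null, so it can't be started or re-enabled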
[10:18] <fgimenez> mvo, there are a couple of things we can do in the test runner to make it quicker, we can save about 3min for every run in the initial steps, i'll try to prepare a branch shortly
[10:18] <fgimenez> mvo, and we can of course run the suite in parallel, this could give us great time savings
=== chihchun_afk is now known as chihchun
[10:32] <mvo> fgimenez: yay, test finished, one error that may well be a real issue! is there a way for me to see what base image version was used for the test? I wonder if it has the latest apparmor
[10:34] <fgimenez> mvo, sure, at the beginning of the log, look for "Launching instance for Snappy image ubuntu-core/custom/..." the version number is right before "-disk1.img"
[10:34] <mvo> fgimenez: thanks again
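A quick way to extract that from a saved console log, following fgimenez's description of the line format:

    # prints the image path; the version number sits just before -disk1.img
    grep -o 'ubuntu-core/custom/[^ ]*-disk1\.img' console.log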
[10:37] <fgimenez> mvo, np, btw the tests are running now against the latest available rolling/edge amd64 image, with elopio's work we can trigger them to run against bbb in spi, and we could define other jobs for executing in amd64 15.04, for example
[10:38] <fgimenez> mvo, when we have snappy-cloud-image in place we will be able to upload new images to the cloud as soon as they are published in system-image; for now this process is still manual, let me know if you need a more recent rolling/edge image
[11:55] <longsleep> sergiusens: Hey, i think i just found a bug in u-d-f, it seems that it applies tarballs from the system-image server in random order: tarballs are extracted into basemount in the order that they finished downloading
[12:16] <Chipaca> sergiusens: ^?
[12:16] <sergiusens> longsleep, hmm, you may be right; I'd need to look
[12:16] * sergiusens was having breakfast
[12:16] * Chipaca approves
[12:17] <longsleep> sergiusens: i can add a bug with pointers to the code if you like
[12:17] <sergiusens> longsleep, that would be good, thanks
[12:24] <longsleep> sergiusens: done, see bug 1517009
[12:24] <ubottu> bug 1517009 in goget-ubuntu-touch (Ubuntu) "ubuntu-device-flash applies system image tarballs in random order" [Undecided,New] https://launchpad.net/bugs/1517009
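A sketch of the fix idea in shell (u-d-f itself is Go, and the tarball names here are placeholders): apply the tarballs in the order the system-image index lists them, so later tarballs overwrite earlier ones deterministically instead of in download-completion order:

    for tarball in ubuntu-1.tar.xz device-2.tar.xz custom-3.tar.xz; do
        sudo tar --numeric-owner -xJf "$tarball" -C "$basemount"
    done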
=== nessita_ is now known as nessita
[13:07] <mvo> elopio: when you have a chance, could you update the ubuntu-core xenial base image for the docker integration tests
=== barry` is now known as barry
[14:17] <elopio> mvo: I can, but I'm not sure I'm understanding what you want.
[14:19] <mvo> elopio: sorry, I probably did not express myself well. so … I enabled the webhook to trigger the integration tests on each pull request, fgimenez held my hand for that :) now I'm very excited that they run. my understanding is that the base image used needs manual updates(?) and we need the latest xenial image with the latest apparmor for one of the branches
[14:19] <mvo> elopio: the feature/native-security-policy-regen branch
[14:19] <mvo> elopio: does that make more sense now?
[14:20] <elopio> mvo: more or less :) they always run against the latest daily rolling edge, atm #235. So there's nothing to update to.
[14:21] <mvo> elopio: oh, I have 247 here right now?
[14:21] <elopio> mvo: do you mean that you need something that will be in the daily not yet generated?
[14:22] <elopio> mvo: ah, #235 is the bbb I'm testing now.
[14:22] <mvo> elopio: aha, ok. if it's always using the latest image I need to ponder why the one test is failing. I was assuming apparmor on the image is out-of-date, it needs the version from ~2-3 days ago
[14:23] <elopio> mvo: do you have a link to the results?
[14:25] <fgimenez> mvo, elopio i think that the latest image uploaded is 229, we still are not able to automatically upload the latest ones; elopio, you can upload it with snappy-cloud-image, having the canonistack instances for the shared user loaded
[14:26] <elopio> fgimenez: ahh.
[14:26] <fgimenez> elopio, i can do it this time and document it somewhere (the snappy-cloud-image readme sounds like a good place :), it will soon be automated
[14:26] <elopio> yes, please.
[14:27] <elopio> sorry mvo for confusing it more.
[14:28] <sergiusens> lool, do you have a minute to discuss plugins, build `drivers` and build tools?
[14:29] <lool> sergiusens: I'm in hangouts for the next 90 min, but then free
[14:29] <mvo> elopio, fgimenez: no worries and thanks for your hard work on this, it's a wonderful feature and it's great to see it progressing so nicely
[14:35] <sergiusens> lool, great, it's about the divide between openjdk, oracle jdk, former bea jdk, ibm's jdk with the combination of make, ant, maven, etc
[14:45] <fgimenez> mvo, elopio ok, 247 is ready, the tests will use it from now on
[14:47] <fgimenez> elopio, with snappy-cloud-image installed from ppa:fgimenez/snappy-cloud-image, just executing snappy-cloud-image -release rolling -channel edge (with the shared user credentials loaded) is enough
[14:48] <fgimenez> elopio, it takes less time to upload the image if you execute that from a cloud instance
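Put together, the manual upload fgimenez describes looks roughly like this; the apt package name and the credentials file are assumptions (canonistack accounts typically use an OpenStack novarc):

    sudo add-apt-repository ppa:fgimenez/snappy-cloud-image
    sudo apt-get update && sudo apt-get install snappy-cloud-image
    source ~/.novarc                                   # load the shared user's cloud credentials
    snappy-cloud-image -release rolling -channel edge  # build and upload the latest rolling/edge image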
[14:52] <mvo> fgimenez: \o/ I triggered a re-test
[14:53] <mvo> fgimenez: this is real magic, I'm really excited
[14:54] <fgimenez> mvo, great! :)
[15:28] <zyga> tedg: thank you for the questions
[16:19] <mvo> just fyi (jdstrand will also send a mail about it). the native-security-policy-regen branch landed, which means we no longer use aa-clickhook to generate the security profiles. this *might* introduce bugs but it's a really nice cleanup step
[16:26] <elopio> plars: the bbb tests stopped working without any changes to the scripts. Could it be timing out?
[16:27] <plars> elopio: when did they last work?
[16:27] <elopio> plars: tuesday.
[16:27] <elopio> then I fixed some bugs on the tests, and no results_payload anymore.
[16:28] <plars> elopio: did you get any output from them? I don't see a whole lot except that it successfully booted into the test image, ran some tests, and got a 255 rc
[16:28] <plars> elopio: ah, hang on, maybe more to it... let me dig a little
[16:28] <elopio> what's a 255 rc?
[16:29] <elopio> no output received. results_payload empty.
[16:34] <plars> elopio: it looks like it couldn't connect to the test image after all
[16:34] <plars> elopio: the bbb seems fine, the default image running on emmc works
[16:35] <elopio> plars: should I do something in my script to notice this?
[16:38] <plars> elopio: I'm not sure that you can - I think it should fail before it even gets to your script. Is it possible the image is just busted? Have you tried this locally on a bbb?
[16:38] <elopio> plars: yes, daily.
[16:38] <elopio> it works.
[16:41] <plars> elopio: you say it last worked on tuesday, the 10th?
[16:41] <elopio> plars: yes.
[16:44] <lentzi90> Is there some simple way of getting the IP addres of a beaglebone running snappy ubuntu? I can't find my micro HDMI adapter... and I can't access the router to check it that way.
[16:44] <beuno> lentzi90, it might be on the network as webdm.local
[16:44] <beuno> try pinging that
[16:45] <lentzi90> ok thanks!
[16:46] <jdstrand_> kyrofa: fyi, see mvo's comment on the security regen branch landing
[16:47] <kyrofa> mvo excellent news! Thanks for the heads up jdstrand_
=== chihchun is now known as chihchun_afk
[16:59] <plars> elopio: so a couple of things...
[16:59] <plars> elopio: 1. I'm trying a test job on both bbb's we have, to see if one of them is bad or something
[17:01] <plars> elopio: 2. the way I'm trying to detect whether we're in the emmc image, or the test image that we just flashed, or didn't manage to get to any image at all isn't great. Pretty much anything I can do to detect if it's in the snappy image we flashed is bound to be fragile, and what that means right now is that, on bbb, we are not properly detecting that it
[17:01] <plars> can't reach the bbb after flashing
[17:02] <plars> elopio: what *should* happen is that the provisioning step should fail, and it wouldn't even try to run your test, so there would definitely be no test results in that case, but I think SPI would mark the test failed
[17:06] <plars> elopio: I'll get a fix in for the case where it doesn't detect boot failure to the test image properly, but I'm more concerned with why it's not seeing the test image. I'll let you know what I find out
[17:06] <elopio> plars: thank you.
[17:08] <plars> elopio: I need to look at the x86 hosts also, I couldn't get any of them working last I tried, and I'm concerned that maybe some change to the image broke them. Any thoughts on that? I heard there were going to be some changes but haven't had a chance to check them out yet
[17:08] <elopio> plars: no. I'm basically ignoring the x86 hosts because we can't reboot them.
[17:08] <elopio> that's the useful part of this suite.
[17:10] <plars> elopio: yeah, I have a story in our backlog to explore other options for automating x86... I think we may have to break down and just netboot them
[17:11] <plars> elopio: I was hoping to use some feature of grub that supposedly lets you read a file over http, and use that to give a hint which mode to boot into, but I can't seem to get networking up on grub unless I pxeboot the system
[17:12] <plars> elopio: so it may make more sense to just have it netboot an initrd image, and do everything from there. I think ogra_ was working on something like that for rpi2, and perhaps it would make sense to just do the same for x86
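For the record, the grub feature plars means is GRUB 2's network stack; a hypothetical grub.cfg fragment, which only works in builds that include the net modules (in practice, as he notes, a netboot/PXE image):

    insmod net
    insmod efinet                                   # EFI network driver; BIOS builds need a different one
    insmod http
    net_bootp                                       # configure the NIC via DHCP
    source (http,10.55.32.1)/snappy/boot-hint.cfg   # read the boot-mode hint over HTTP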
[17:12] <mvo> elopio: I triggered a new xenial image with the new security policy generation code, if you want to play around a bit later and poke at it
[17:13] <plars> elopio: hmm, the first bbb has booted and seems to be running my test. I can connect to it and snappy seems to be running, so I think it *can* work
[17:13] <elopio> plars: I'll launch a new one.
[17:13] <plars> elopio: what are the options you are passing to udf on it?
[17:13] <elopio> mvo: great, thank you!
[17:13] <mvo> elopio, fgimenez: there is also a new initramfs-tools-ubuntu-core with all-snap support; it should be fully backward compatible, but if you see boot issues in the tests alert me please
[17:14] <elopio> plars: core rolling --channel edge --oem beagleblack --developer-mode
[17:15] <plars> elopio: both of mine ran just fine
[15:15] <elopio> mvo: is it good enough to check for boot problems with systemctl --failed --all?
[17:18] <plars> elopio: that's the same as mine, just in a different order
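For completeness, the full invocation both of them are describing is presumably along these lines; the -o output flag and the image name are assumptions, not quoted from the log:

    sudo ubuntu-device-flash core rolling --channel edge --oem beagleblack --developer-mode -o snappy-bbb.img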
[17:20] <fgimenez> mvo, elopio is it included in the latest image?
[17:52] <elopio> plars: same thing, no payload http://10.55.32.109:8080/job/snappy-daily-rolling-bbb/120/console
[18:07] <plars> elopio: ok, so it looks like the flash went just fine, and I can even reach the image. If you are sure about that image id though, there should definitely be something there
[18:07] <plars> elopio: the test failed, but spi should still fill in something
[18:08] <plars> elopio: that id looks completely empty to me - as if it doesn't even exist
[18:08] <plars> elopio: is that what you're seeing?
[18:08] <plars> elopio: give me a bit to grab some lunch and I'll take a closer look
[18:09] <elopio> plars: do you mean "87fd5a8b-c9be-4a92-86ac-c3b6f0496f8d" ?
[18:09] <plars> elopio: yes
[18:09] * elopio afk too.
[18:09] <mvo> elopio: I think nothing should break
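The check elopio proposes is cheap enough to run after every test boot; a minimal sketch:

    # non-empty output means at least one unit failed during boot
    systemctl --failed --all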
[18:10] <elopio> plars: no, for that id I see 'test_status': 'FAILED', but 'result_payload': {}
[18:11] <plars> elopio: weird, I see nothing, but maybe that's because of my credentials?
[18:11] <plars> elopio: well, the reason you see nothing in the payload then would be because your script didn't put anything there
[18:11] <elopio> plars: can be. Take a look at that jenkins output, it pings until the results appear, after like 44 minutes.
[18:11] <plars> the failed status would probably come from spi itself
[18:12] <plars> I'll try to reproduce the steps by hand and see if I can spot something
[18:12] <elopio> plars: right, what I'm wondering is why it was putting stuff last tuesday, but then after no changes it's now empty. My first suspicion was a timeout.
[18:12] <plars> let me grab food first though
[18:48] <plars> elopio: ok, I found the problem
[18:48] <plars> results=cat: restuls/output/artifacts/results.subunit: No such file or directory
[18:49] <plars> elopio: also, would you mind cleaning up the tmpdirs you create? since they are not in the SPI generated path, they won't get automatically cleaned up
[18:49] <plars> elopio: an easy way you can do it (which would also facilitate easier debugging) is to do a static path, and just rm it before starting
[18:49] <plars> elopio: ex: rm -rf /tmp/elopio && mkdir /tmp/elopio
[19:16] <elopio> plars: I do this: results=$(cat restuls/output/artifacts/results.subunit 2>&1)
[19:16] <elopio> so if the file doesn't exist, it will assign the error to results.
[19:17] <elopio> it should end in the json anyway.
[19:20] <elopio> I added a trap and corrected the typo. Let's see what happens now.
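The trap elopio mentions, combined with plars's static-path suggestion, would look something like this:

    workdir=/tmp/elopio
    rm -rf "$workdir" && mkdir "$workdir"   # plars's reset-on-start pattern
    trap 'rm -rf "$workdir"' EXIT           # cleanup runs however the script exits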
[19:21] <plars> elopio: hmm, I think you'll hit problems when writing the json, because ${results} doesn't have anything
[19:21] <plars> elopio: anyway, I spotted another problem, your json is missing a comma, so I think it's going to choke on that also
[19:21] <plars> elopio: on the summary line
[19:23] <elopio> gagh, I hate json.
[19:23] <elopio> comma added.
[19:23] <elopio> I have no idea how it worked on tuesday then. must have changed it by mistake.
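One way to sidestep hand-written commas entirely is to let a tool assemble the JSON; jq and the field names here are assumptions, not something the log shows:

    results=$(cat results/output/artifacts/results.subunit 2>&1)
    jq -n --arg results "$results" \
       '{results: $results, summary: {status: "done"}}' > result_payload.json   # jq handles quoting and commas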
[19:24] <elopio> plars: so, for dumb people like me, the output of this script is vital
[19:25] <elopio> I can't trust myself to write a script that collects all its own errors.
[19:25] <plars> elopio: there was no output from your script that helped me either, I only got that far by running it by hand and adding set -x
[19:26] <elopio> plars: that's the output I want, the one from -x.
[19:26] <plars> elopio: I am capturing what the script exposes as best I can in logstash, but that's not always reliable, and spi doesn't seem to gather anything on its own
[19:26] <elopio> I removed it because I could not access it.
[19:28] <plars> elopio: sadly, that system where I run logstash/kibana is not something you can get to through the vpn right now, I've already got a ticket in with IS to resolve that, so we'll see how it goes.
[19:29] <plars> elopio: if they can get us hooked up to do that, then I'll be able to get you pretty easy access to the logs coming out of it, but it wouldn't have been helpful here I think
[19:30] <elopio> plars: that would be nice so I stop bothering you.
=== chihchun_afk is now known as chihchun
[19:54] <ksuttle> Hey there. Where can I report bugs for apt-get?
[19:57] <ksuttle> https://bugs.launchpad.net/ubuntu ?
[20:03] <genii> ksuttle: ubuntu-bug apt
[20:06] <ksuttle> Thanks
=== chihchun is now known as chihchun_afk
[20:40] <jdstrand_> niemeyer1: hey, you still around?
=== jdstrand_ is now known as jdstrand
[20:41] <jdstrand> niemeyer1: wondering if you could fast track https://github.com/ubuntu-core/snappy/pull/112
[20:42] <jdstrand> niemeyer1: it is just a doc change. I was writing the announcement of the policy generation branch and noticed that the docs weren't quite right
[20:42] <niemeyer1> jdstrand: Will check it out
[20:44] <jdstrand> thanks
[21:53] <plars> Has there been any change to how you write an image to the hard disk if you want to boot from there? (x86)
[21:53] <plars> I used to be able to dd it, but that doesn't seem to work now
[21:53] <plars> elopio: do you know? ^
[22:39] <elopio> plars: not that I know.
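The dd recipe plars used to rely on is the standard one; the device name below is an example, and pointing it at the wrong disk is destructive:

    sudo dd if=snappy.img of=/dev/sdX bs=4M
    sync    # flush the last blocks to disk before booting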