Karl Dubost: CSS prefixes and gzip compression |
I was discussing with Mike how some Web properties are targeting only WebKit/Blink browsers (for their mobile sites), to the point that they do not add the standard properties for certain CSS features. We see that a lot in Japan, for example, but not only there.
We often see things like this code:
.nBread{ min-height: 50px; display: -webkit-box; -webkit-box-align: center; -webkit-box-pack: center; padding-bottom: 3px; }
which is easily fixed by just adding the necessary properties:
.nBread{ min-height: 50px; display: -webkit-box; -webkit-box-align: center; -webkit-box-pack: center; padding-bottom: 3px; display: flex; align-items: center; justify-content: center; }
It would make the Web site more future resilient too.
Adding the standard properties costs a few extra bytes in the CSS. Mike wondered whether that cost shrinks under compression when the standard property largely repeats the prefixed one, giving gzip a pattern to match:
#foo { -webkit-box-shadow: 1px 1px 1px red; box-shadow: 1px 1px 1px red; }
It seems to work. Building on Mike's idea, I wondered whether the order of the declarations was significant, so I tested by adding additional properties and changing the order.
mike.prefix.css
#foo { background-color: #fff; -webkit-box-shadow: 1px 1px 1px red; }
mike.both.css
#foo { background-color: #fff; -webkit-box-shadow: 1px 1px 1px red; box-shadow:1px 1px 1px red; }
mike.both-order.css
#foo { -webkit-box-shadow: 1px 1px 1px red; background-color: #fff; box-shadow:1px 1px 1px red; }
then ran tests similar to Mike's.
Obviously the order matters, because it helps gzip to find text patterns to compress.
gzip -c mike.prefix.css | wc -c
gzip -c mike.both.css | wc -c
gzip -c mike.both-order.css | wc -c
For things like -webkit- flexbox and gradients, it doesn't help very much, because the prefixed and standard syntaxes are very different (see the first piece of code in this post). But for properties where the standard property is just the prefixed one minus the prefix, the order matters. It would be interesting to test this on real, long CSS files and not just a couple of properties.
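The effect is easy to reproduce outside the shell. Here is a small Python sketch that compresses the three test files above with zlib (DEFLATE, the same algorithm gzip wraps, so the relative sizes follow the same pattern) and prints raw versus compressed sizes:

```python
import zlib

# The three test files from the post, as byte strings.
prefix = b"#foo { background-color: #fff; -webkit-box-shadow: 1px 1px 1px red; }"
both = (b"#foo { background-color: #fff; "
        b"-webkit-box-shadow: 1px 1px 1px red; box-shadow:1px 1px 1px red; }")
both_order = (b"#foo { -webkit-box-shadow: 1px 1px 1px red; "
              b"background-color: #fff; box-shadow:1px 1px 1px red; }")

for name, css in [("prefix", prefix), ("both", both), ("both-order", both_order)]:
    # Compare raw size against the DEFLATE-compressed size.
    print(f"{name}: {len(css)} bytes raw, {len(zlib.compress(css, 9))} bytes compressed")
```

Because the standard declaration is almost a verbatim repeat of the prefixed one, DEFLATE can encode it as short back-references, so the compressed cost of adding it is only a few bytes rather than the full 28 bytes of raw CSS.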
Otsukare!
|
Mozilla Addons Blog: De-coupling Reviews from Signing Unlisted Add-ons |
tl;dr – By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews.
Over the past few days, there have been discussions around the first step of the add-on signing process, which involves a programmatic review of submissions by a piece of code known as the “validator”. The validator can trigger a manual review of submissions for a variety of reasons and halt the signing process, which can delay the release of an add-on because of the signing requirement that will be enforced in Firefox 43 and later versions.
There has been debate over whether the validator is useful at all, since it is possible for a malicious player to write code that bypasses it. We agree the validator has limitations; the reality is we can only detect what we know about, and there’s an awful lot we don’t know about. But the validator is only one component of a review process that we hope will make it easier for developers to ship add-ons, and safer for people to use them. It is not meant to be a catch-all malware detection utility; rather, it is meant to help developers get add-ons into the hands of Firefox users more expediently.
With that in mind, we are going to remove validation as a gating mechanism for unlisted add-ons. We want to make it easier for developers to ship unlisted add-ons, and will perform reviews independently of any signing process. By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews. This date is contingent on how quickly we can make the technical, procedural, and policy changes required to support this. The add-ons signing API, introduced earlier this month, will allow for a completely automated signing process, and will be used as part of this solution.
We’ll continue to require developers to adhere to the Firefox Add-ons policies outlined on MDN, and would ask that they ensure their add-ons conform to those policies prior to submitting them for signing. Developers should also be familiar with the Add-ons Reviewer Guide, which outlines some of the more popular reasons an add-on would fail a review and be subject to blocklisting.
I want to thank everyone for their input and insights over the last week. We want to make sure the experience with Firefox is as painless as possible for Add-on developers and users, and our goals have never included “make life harder”, even if it sometimes seems that way. Please continue to speak out, and feel free to reach out to me or other team members directly.
I’ll post a more concrete overview of the next steps as they’re available, and progress will be tracked in bug 1229197. Thanks in advance for your patience.
kev
https://blog.mozilla.org/addons/2015/12/01/de-coupling-reviews-from-signing-unlisted-add-ons/
|
Chris AtLee: MozLando Survival Guide |
I thought I would share a few tips I've learned over the years of how to make the most of these company gatherings. These summits or workweeks are always full of awesomeness, but they can also be confusing and overwhelming.
It's great to have a (short!) list of people you'd like to see in person. Maybe somebody you've only met on IRC / vidyo or bugzilla?
Having a list of people you want to say "thank you" in person to is a great way to approach this. Who doesn't like to hear a sincere "thank you" from someone they work with?
I don't know about you, but I can find it pretty challenging at times to get my ideas across in IRC or on an etherpad. It's so much easier in person, with a pad of paper or whiteboard in front of you. You can share ideas with people, and have a latency/lag-free conversation! No more fighting AV issues!
A week of full days of meetings, code sprints, and blue sky dreaming can be really draining. Don't feel bad if you need to take a breather. Go for a walk or a jog. Take a nap. Read a book. You'll come back refreshed, and ready to engage again.
That's it!
I look forward to seeing you all next week!
|
Air Mozilla: Webdev Extravaganza: December 2015 |
Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.
|
Chris H-C: To-Order Telemetry Dashboards: dashboard-generator |
Say you’ve been glued to my posts about Firefox Telemetry. You became intrigued by the questions you could answer and ask using actual data from actual users, and considered writing your own website using the single-API telemetry-wrapper.
However, you aren’t a web developer. You don’t like JavaScript. Or you’re busy. Or you don’t like reading READMEs on GitHub.
This is where dashboard-generator can step in to help out. Simply visit the website and build-your-own dash to your exacting specifications:
Choose your channel, version, and metric. “-Latest-” will ensure that the generated dashboard will always use the latest version in the selected channel when you reload that page. Otherwise, you might find yourself always looking at GC_MS values from beta 39.
If you are only interested in clients reporting from a particular application, operating system, or with a certain E10s setting then make your choices in Filters.
If you want a histogram like telemetry.mozilla.org’s “Histogram Dashboard” then make sure you select Histogram and then choose if you want the ends of the histogram trimmed, whether (and how sensibly) you want to compare clients across particular settings, and whether to sanitize the results so you only use data that is valid and has a lot of samples.
If you want an evolution plot like telemetry.mozilla.org’s “Evolution Dashboard” then select Evolution. From there, choose whether to use the build date or submission date of samples, how many versions back from the selected one you would like to graph the values over, and whether to sanitize the results so you only use data that is valid and has a lot of samples.
Your choices made, click “Add to Dashboard”. Then choose again! And again!
Make a mistake? Don’t worry, you can remove rows using the ‘-‘ buttons.
Not sure what it’ll look like when you’re done? Hit ‘Generate Dashboard’ and you’ll get a preview in CodePen showing what it will look like and giving you an opportunity to fiddle with the HTML, CSS, and JS.
When you see something you like in the CodePen, hit ‘Save’ and it’ll give you a URL you can use to collaborate with others, and an option to ‘Export’ the whole site for when you want to self-host.
If you find any bugs or have any requests, please file an issue ticket here. I’ll be using it to write an E10s dashboard in the near term, and hope you’ll use it, too!
:chutten
https://chuttenblog.wordpress.com/2015/12/01/to-order-telemetry-dashboards-dashboard-generator/
|
Mozilla Fundraising: Mozilla’s New Donation Form Features |
https://fundraising.mozilla.org/mozillas-new-donation-form-features/
|
Jan de Mooij: Testing Math.random(): Crushing the browser |
(For tl;dr, see the Conclusion.)
A few days ago, I wrote about Math.random() implementations in Safari and (older versions of) Chrome using only 32 bits of precision. As I mentioned in that blog post, I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+. V8 has been using the same algorithm since last week. (Update Dec 1: WebKit is now also using XorShift128+!)
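For reference, the XorShift128+ update step is tiny. The following Python sketch uses the (23, 17, 26) shift constants from Vigna's paper, the variant SpiderMonkey adopted; the masking emulates C's 64-bit unsigned wraparound. This is an illustration, not the engine's actual code:

```python
MASK64 = (1 << 64) - 1  # emulate uint64_t wraparound

def xorshift128plus_step(s0, s1):
    """One XorShift128+ step: returns (new_s0, new_s1, 64-bit output)."""
    x, y = s0, s1
    new_s0 = y
    x ^= (x << 23) & MASK64
    new_s1 = x ^ y ^ (x >> 17) ^ (y >> 26)
    return new_s0, new_s1, (new_s1 + y) & MASK64

# Generate a few outputs from an arbitrary nonzero seed.
state = (1, 2)
outputs = []
for _ in range(3):
    s0, s1, out = xorshift128plus_step(*state)
    state = (s0, s1)
    outputs.append(out)
```

The whole generator is just a couple of shifts, xors, and an addition per output, which is part of why it is attractive for a Math.random() implementation.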
The most extensive RNG test is TestU01. It's a bit of a pain to run: to test a custom RNG, you have to compile the library and then link it to a test program. I did this initially for the SpiderMonkey shell but after that I thought it'd be more interesting to use Emscripten to compile TestU01 to asm.js so we can easily run it in different browsers.
Today I tried this and even though I had never used Emscripten before, I had it running in the browser in less than an hour. Because the tests can take a long time, it runs in a web worker. You can try it for yourself here.
I also wanted to test window.crypto.getRandomValues() but unfortunately it's not available in workers.
Disclaimer: browsers implement Math functions like Math.sin differently and this can affect their precision. I don't know if TestU01 uses these functions and whether it affects the results below, but it's possible. Furthermore, some test failures are intermittent so results can vary between runs.
TestU01 has three batteries of tests: SmallCrush, Crush, and BigCrush. SmallCrush runs only a few tests and is very fast. Crush and especially BigCrush have a lot more tests so they are much slower.
Running SmallCrush takes about 15-30 seconds. It runs 10 tests with 15 statistics (results). Here are the number of failures I got:
Browser | Number of failures |
---|---|
Firefox Nightly | 1: BirthdaySpacings |
Firefox with XorShift128+ | 0 |
Chrome 48 | 11 |
Safari 9 | 1: RandomWalk1 H |
Internet Explorer 11 | 1: BirthdaySpacings |
Edge 20 | 1: BirthdaySpacings |
Chrome/V8 failing 11 out of 15 is not too surprising. Again, the V8 team fixed this last week and the new RNG should pass SmallCrush.
The Crush battery of tests is much more time consuming. On my MacBook Pro, it finishes in less than an hour in Firefox but in Chrome and Safari it can take at least 2 hours. It runs 96 tests with 144 statistics. Here are the results I got:
Browser | Number of failures |
---|---|
Firefox Nightly | 12 |
Firefox with XorShift128+ | 0 |
Chrome 48 | 108 |
Safari 9 | 33 |
Internet Explorer 11 | 14 |
XorShift128+ passes Crush, as expected. V8's previous RNG fails most of these tests and Safari/WebKit isn't doing too great either.
BigCrush didn't finish in the browser because it requires more than 512 MB of memory. To fix that I probably need to recompile the asm.js code with a different TOTAL_MEMORY value or with ALLOW_MEMORY_GROWTH=1.
Furthermore, running BigCrush would likely take at least 3 hours in Firefox and more than 6-8 hours in Safari, Chrome, and IE, so I didn't bother.
The XorShift128+ algorithm being implemented in Firefox and Chrome should pass BigCrush (for Firefox, I verified this in the SpiderMonkey shell).
I noticed Firefox (without XorShift128+) and Internet Explorer 11 get very similar test failures. When running SmallCrush, they both fail the same BirthdaySpacings test. Here's the list of Crush failures they have in common:
This suggests the RNG in IE may be very similar to the one we used in Firefox (imported from Java decades ago). Maybe Microsoft imported the same algorithm from somewhere? If anyone on the Chakra team is reading this and can tell us more, it would be much appreciated :)
IE 11 fails 2 more tests that pass in Firefox. Some failures are intermittent and I'd have to rerun the tests to see if these failures are systematic.
Based on the SmallCrush results I got with Edge 20, I think it uses the same algorithm as IE 11 (not too surprising). Unfortunately the Windows VM I downloaded to test Edge shut down for some reason when it was running Crush so I gave up and don't have full results for it.
I used Emscripten to port TestU01 to the browser. Results confirm most browsers currently don't use very strong RNGs for Math.random(). Both Firefox and Chrome are implementing XorShift128+, which has no systematic failures on any of these tests.
Furthermore, these results indicate IE and Edge may use the same algorithm as the one we used in Firefox.
http://jandemooij.nl/blog/2015/11/30/testing-math-random-crushing-the-browser/
|
The Servo Blog: This Week In Servo 43 |
In the last two weeks, we landed 165 PRs in the Servo organization’s repositories.
The huge news from the last two weeks is that after some really serious efforts from across the team and community to handle the libc changes required, we've upgraded Rust compiler versions! This change is more exciting than usual because it switches us from our custom Rust compiler onto the nightlies produced by the Rust team. The follow-up upgrade was really quick!
Now that we have separate support for making try builds, we have added dzbarsky, ecoal95, KiChjang, ajeffrey, and waffles. Please nominate your local friendly contributor today!
currentColor for Canvas
Screencast of this post being submitted to Hacker News:
At the meeting two weeks ago we discussed intermittent test failures, using a mailing lists vs. discourse, the libcpocalypse, and our E-Easy issues. There was no meeting last week.
|
Air Mozilla: Mozilla Weekly Project Meeting, 30 Nov 2015 |
The Monday Project Meeting
https://air.mozilla.org/mozilla-weekly-project-meeting-20151130/
|
Kartikaya Gupta: Asynchronous scrolling in Firefox |
In the Firefox family of products, we've had asynchronous scrolling (aka async pan/zoom, aka APZ, aka compositor-thread scrolling) in Firefox OS and Firefox for Android for a while - even though they had different implementations, with different behaviors. We are now in the process of taking the Firefox OS implementation and bringing it to all our other platforms - including desktop and Android. After much hard work by many people, including but not limited to :botond, :dvander, :mattwoodrow, :mstange, :rbarker, :roc, :snorp, and :tn, we finally have APZ enabled on the nightly channel for both desktop and Android. We're working hard on fixing outstanding bugs and getting the quality up before we let it ride the trains out to DevEdition, Beta, and the release channel.
If you want to try it on desktop, note that APZ requires e10s to be enabled, and is currently only enabled for mousewheel/trackpad scrolling. We do have plans to implement it for other input types as well, although that may not happen in the initial release.
Although getting the basic machinery working took some effort, we're now mostly done with that and are facing a different but equally challenging aspect of this change - the fallout on web content. Modern web pages have access to many different APIs via JS and CSS, and implement many interesting scroll-linked effects, often triggered by the scroll event or driven by a loop on the main thread. With APZ, these approaches don't work quite so well because inherently the user-visible scrolling is async from the main thread where JS runs, and we generally avoid blocking the compositor on main-thread JS. This can result in jank or jitter for some of these effects, even though the main page scrolling itself remains smooth. I picked a few of the simpler scroll effects to discuss in a bit more detail below - not a comprehensive list by any means, but hopefully enough to help you get a feel for some of the nuances here.
Smooth scrolling
Smooth scrolling - that is, animating the scroll to a particular scroll offset - is something that is fairly common on web pages. Many pages do this using a JS loop to animate the scroll position. Without taking advantage of APZ, this will still work, but can result in less-than-optimal smoothness and framerate, because the main thread can be busy with doing other things.
Since Firefox 36, we've had support for the scroll-behavior CSS property which allows content to achieve the same effect without the JS loop. Our implementation for scroll-behavior without APZ enabled still runs on the main thread, though, and so can still end up being janky if the main thread is busy. With APZ enabled, the scroll-behavior implementation triggers the scroll animation on the compositor thread, so it should be smooth regardless of load on the main thread. Polyfills for scroll-behavior or old-school implementations in JS will remain synchronous, so for best performance we recommend switching to the CSS property where possible. That way as APZ rolls out to release, you'll get the benefits automatically.
Here is a simple example page that has a spinloop to block the main thread for 500ms at a time. Without APZ, clicking on the buttons results in a very janky/abrupt scroll, but with APZ it should be smooth.
position:sticky
Another common paradigm seen on the web is "sticky" elements - they scroll with the page for a bit, and then turn into position:fixed elements after a point. Again, this is usually implemented with JS listening for scroll events and updating the styles on the elements based on the scroll offset. With APZ, scroll events are going to be delayed relative to what the user is seeing, since the scroll events arrive on the main thread while scrolling is happening on the compositor thread. This will result in glitches as the user scrolls.
Our recommended approach here is to use position:sticky when possible, which we have supported since Firefox 32, and which we have support for in the compositor. This CSS property allows the element to scroll normally but take on the behavior of position:fixed beyond a threshold, even with APZ enabled. This isn't supported across all browsers yet, but there are a number of polyfills available - see the resources tab on the Can I Use position:sticky page for some options.
Again, here is a simple example page that has a spinloop to block the main thread for 500ms at a time. With APZ, the JS version will be laggy but the position:sticky version should always remain in the right place.
Parallax
Parallax. Oh boy. There's a lot of different ways to do this, but almost all of them rely on listening to scroll events and updating element styles based on that. For the same reasons as described in the previous section, implementations of parallax scrolling that are based on scroll events are going to be lagging behind the user's actual scroll position. Until recently, we didn't have a solution for this problem.
However, a few days ago :mattwoodrow landed compositor support for asynchronous scroll adjustments of 3D transforms, which allows a pure CSS parallax implementation to work smoothly with APZ. Keith Clark has a good writeup on how to do this, so I'm just going to point you there. All of his demo pages should scroll smoothly in Nightly with APZ enabled.
Unfortunately, it looks like this CSS-based approach may not work well across all browsers, so please make sure to test carefully if you want to try it out. Also, if you have suggestions on other methods on implementing parallax so that it doesn't rely on a responsive main thread, please let us know. For example, :mstange created one at http://tests.themasta.com/transform-fixed-parallax.html which we should be able to support in the compositor without too much difficulty.
Other features
I know that there are other interesting scroll-linked effects that people are doing or want to do on the web, and we'd really like to support them with asynchronous scrolling. The Blink team has a bunch of different proposals for browser APIs that can help with these sorts of things, including things like CompositorWorker and scroll customization. For more information and to join the discussion on these, please see the public-houdini mailing list. We'd love to get your feedback!
(Thanks to :botond and :mstange for reading a draft of this post and providing feedback.)
|
Gijs Kruitbosch: Did it land? |
I wrote a thing to check if your patch landed/stuck. It’s on github because that’s what people seem to do these days. That means you can use it here:
The “point” of this mini-project is to be able to easily determine whether bug X made today’s nightly, or whether bug Y landed in beta 5. Non-graph changelogs, such as the ones most accessible on hgweb, can be misleading (i.e. beta 5 was tagged after you landed, but on a revision from before you landed…), plus it’s boring to look up revisions manually in a bug, then look them up on hgweb, and then try to determine whether revision A is in the ancestry tree of revision B. So I automated it.
Note that the tool doesn’t:
|
Andreas Tolfsen: WebDriver update from TPAC 2015 |
I came back from TPAC (the W3C’s Technical Plenary/Advisory Committee meeting week) earlier this month, where I attended the Browser Testing and Tools Working Group’s meetings on WebDriver.
Unlike previous meetings, this was the first time we had a reasonably up-to-date specification text to discuss. That clearly paid off, because we were able to make some defining decisions on long-standing, controversial topics. It shows how important it is for assigned action items to be completed in time before a specification meeting, and to have someone with time dedicated to working on the spec.
The WG decided to punt the element visibility, or “displayedness” concept, to level 2 of the specification and in the meantime push for better visibility primitives in the platform. I’ve previously outlined in detail the reasons why it’s not just a bad idea—but impossible—for WebDriver to specify this concept. Instead we will provide a non-normative description of Selenium’s visibility atom in an appendix to give some level of consistency for implementors.
Fortunately Selenium’s visibility approximation atom can be implemented entirely in content JavaScript, which means it can be provided both in client bindings and as extension commands.
This does not mean we are giving up on visibility. There is general agreement in the WG that it is a desirable feature, but since it’s impossible to define naked eye visibility using existing platform APIs we call upon other WGs to help outline this. Visibility of elements in viewport is not a primitive that naturally fits within the scope of WebDriver.
Our decision has implications for element interactability, which is used to determine if you can interact with an element. This previously relied on the element visibility algorithm, but as an alternative to the tree-traversal visibility algorithm we dismissed, we are experimenting with a somewhat naïve hit-testing alternative that takes the centre coordinates of the portion of the element inside the viewport and calls elementsAtPoint, ignoring elements that are opaque.
We had previously decided to make two separate commands for getting attributes and properties. This was controversial because it deviates from the behaviour of Selenium’s getAttribute, that conflates the DOM concepts of attributes and properties.
Because the WG decided to stick with David Burns’s proposal on special-casing boolean attributes, the good news is that the Selenium behaviour can be emulated using WebDriver primitives.
In practice this means that when Get Element Attribute is called for an element that carries a boolean attribute, it will return the string "true" rather than the DOM attribute value, which would normally be an empty string. We return a string so that dynamically typed programming languages can evaluate it into something truthy, and because there is a belief in the WG that an empty string return value would be confusing to users.
Because we don’t know which attributes are boolean attributes from the DOM’s point of view, it’s not the cleanest approach since it means we must maintain a hard-coded list in WebDriver. It will also arguably cause problems for custom elements, because it is not given that they mirror the default attribute values.
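To make the special-casing concrete, here is a hedged Python sketch of the lookup described above. The element is modelled as a hypothetical plain dict of its DOM attributes, and the boolean-attribute set is only an illustrative subset of the hard-coded list the spec would need to maintain:

```python
# Illustrative subset of HTML's boolean attributes; the spec's
# hard-coded list would be much longer.
BOOLEAN_ATTRIBUTES = {"checked", "selected", "disabled", "required", "readonly"}

def get_element_attribute(attributes, name):
    """Sketch of Get Element Attribute's boolean special case.

    `attributes` is a hypothetical dict of an element's DOM attributes,
    mapping attribute name to its (possibly empty) string value.
    """
    if name not in attributes:
        return None
    if name in BOOLEAN_ATTRIBUTES:
        # Boolean attributes serialise to the string "true", not to the
        # (usually empty) DOM attribute value.
        return "true"
    return attributes[name]
```

So a checked input would yield "true" for its checked attribute, while an ordinary attribute value comes back verbatim.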
One of the requirements for moving to REC is writing a decent test suite. WebDriver is in the fortunate position that it’s an evolution of existing implementations, each with its own body of tests, many of which we can probably re-purpose. One of the challenges with the existing tests is that the harness does not easily allow for testing the lower-level details of the protocol.
So far I have been able to make a start with merging Microsoft’s pending pull requests. Not all the tests merged match what the specification mandates any longer, but we decided to do this before any substantial harness work is done, to eliminate the need for Microsoft to maintain their own fork of Web Platform Tests.
Microsoft and Mozilla are both working on implementations, so there is a pressing need for a test suite that reflects the realities of the specification. Vital chapters, such as Element Retrieval and Interactions, are either undefined or in such a poor state that they should be considered unimplementable.
Despite these reservations, I’d say the WebDriver spec is in a better state than ever before. At TPAC we also had meetings about possible future extensions, including permissions and how WebDriver might help facilitate testing of WebBluetooth as well as other platform APIs.
The WG is concurrently pushing for WebDriver to be used in Web Platform Tests to automate the “non-automatable” test cases that require human interaction or privileged access. In fact, there’s an ongoing Quarter of Contribution project sponsored by Mozilla to work on facilitating WebDriver in a sort of “meta-circular” fashion, directly from testharness.js tests.
But more on that later. (-:
|
This Week In Rust: This Week in Rust 107 |
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.
This week's edition was edited by: nasa42, brson, and llogiq.
From and Into traits.
69 pull requests were merged in the last week.
See the triage digest and subteam reports for more details.
#[deprecated] to #[rustc_deprecated].
macro undefined error message.
rustc::metadata to a rustc_metadata crate.
#[staged_api].
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:
IndexAssign trait that allows overloading "indexed assignment" expressions like a[b] = c.
alias attribute to #[link] and -l.
If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.
Tweet us at @ThisWeekInRust to get your job offers listed here!
This week's Crate of the Week is Chrono, a crate that offers very handy timezone-aware Duration and Date/Time types.
Thanks to Ygg01 for the suggestion. Submit your suggestions for next week!
http://this-week-in-rust.org/blog/2015/11/30/this-week-in-rust-107/
|
Emma Irwin: Revisiting the Word ‘Recognition’ in #FOSS and the Dream of Open Credentials |
I think a lot about ways we can better surface Participation as real-world offering for professional and personal development.
And this tweet from Laura triggered all kinds of thinking.
Check out this @BryanMMathers and @dajbelshaw on why open source needs open badges: https://t.co/9By0pyiCd0 @opensourceway
— Laura Hilliger (@epilepticrabbit) November 27, 2015
Most thinking was reminiscent at first.
Working on open projects teaches relevant skills, helps establish mentorship relationships and surfaces hidden strengths and talents. It’s my own story.
And then reflective..
The reason we’ve struggled to make participation a universally recognized opportunity for credential building is our confusion over the term ‘recognition’. In Open Source we use this single term to convey similar, yet entirely different, meanings:
* Gratitude (“hey thanks for that !”)
* You’re making progress (“great work, keep going! “)
* Appreciation (“we value you”)
* You completed or finished something (congratulations you did it!)
In my opinion, many experiments with badges for FOSS participation have actually compounded the problem: if I am issued a badge I didn’t request (and I have many of these), or don’t value (I have many of these too), we’re using the process as a prod and not as a genuine acknowledgement of accomplishment. That’s OK – gamification is OK – but it’s not credential building in the real-world sense, and we need to separate these two use cases to move forward with open credentials.
And I kept thinking…
The Drupal community already does a good job of helping people surface real-world credentials. Drupal.org member profiles expose contribution and community leadership, while business profiles demonstrate (and advertise) commitment through project sponsorship and contribution. Drupal also has a fantastic series of project ladders, which I’ve always thought would be a great way to experiment with badges, designing connected learning experiences through participation. Drupal ladders definitely inspired my own work around a ‘Participation Standard’, and I wonder how projects like Mozilla, Drupal, and Fedora can work together a bit more on defining a standard for ‘Distributed Recognition’.
@sunnydeveloper oh I agree! Drupal has its own special benefits from this too, around distributed recognition of contribution /@dajbelshaw
— Rachel Lawson (@rachel_norfolk) November 27, 2015
And the relentless thinking continued…
@makerbase has potential to profile FOSS communities, but without manual-additions being the only way to add contributors. — Emma Irwin (@sunnydeveloper) November 28, 2015
@sunnydeveloper we are definitely thinking about that! /cc @amateurhuman
— Anil Dash (@anildash) November 28, 2015
I then posed the question in our Discourse, asking what ‘Open Credentials’ could look like for Participation at Mozilla. There are some great responses so far, including solutions like Makerbase, and a reminder of how hard it currently is to be ‘seen’ in the Mozilla community, and thus how important this topic actually is.
And the thinking will continue, hopefully as a growing group ….
What I do know is that we have to stop using the word ‘recognition’ as a catch-all, and that there is a huge opportunity to build Open Credentials through Participation; the leadership framework might be a way to test what that looks like.
If you have opinions – would love to have you join our discussion thread!
image by jingleslenobel CC by-NC-ND 2.0
|
Robert O'Callahan: Even More rr Replay Performance Improvements! |
While writing my last blog post I realized I should try to eliminate no-op reschedule events from rr traces. The patch turned out to be very easy, and the results are impressive:
Now replay is faster than recording in all the benchmarks, and for Mochitest is about as fast as normal execution. (As discussed in my previous post, this is probably because the replay excludes some code that runs during normal execution: the test harness and the HTTP server.) Hopefully this turns into real productivity gains for rr users.
http://robert.ocallahan.org/2015/11/even-more-rr-replay-performance.html
|
Adam Roach: Better Living through Tracking Protection |
http://sporadicdispatches.blogspot.com/2015/11/better-living-through-tracking.html
|
John O'Duinn: “Distributed” ER#3 now available! |
Earlier this week, just before the US Thanksgiving holidays, we shipped Early Release #3 for my “Distributed” book-in-progress.
Early Release #3 (ER#3) adds two new chapters: Ch.1 remoties trends and Ch.2 the real cost of an office, plus many tweaks/fixes to the previous chapters. There are now a total of 9 chapters available (1, 2, 4, 6, 7, 8, 10, 13, 15), arranged into three sections. These chapters were the inspiration for recent presentations and blog posts here, here and here.
ER#3 comes one month after ER#2. You can buy ER#3 by clicking here, or by clicking on the thumbnail of the book cover. Anyone who already has ER#1 or ER#2 should get prompted with a free update to ER#3. (If you don’t, please let me know!) And yes, you’ll get updated when ER#4 comes out next month.
Please let me know what you think of the book so far. Your feedback helps shape/scope the book! Is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, that you wish you had known when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?
Thank you to everyone who’s already sent me feedback/opinions/corrections – all changes that are making the book better. I’m merging changes/fixes as fast as I can – some days are fixup days, some days are new writing days. It’s all great to see coming together. To make sure that any feedback doesn’t get lost or caught in spam filters, it’s best to email a special email address (feedback at oduinn dot com), although feedback via Twitter and LinkedIn also works. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.
Now, it’s time to get back to typing. ER#4 is coming soon!
John.
http://oduinn.com/blog/2015/11/28/distributed-er3-now-available/
|
Robert O'Callahan: rr Replay Performance Improvements |
I've been spending a lot of time using rr, as have some other Mozilla developers, and it occurred to me that a small investment in speeding up the debugging experience could pay off in improved productivity quite quickly. Until recently no-one had ever really done any work to speed up replay, so there was some low-hanging fruit.
During recording we avoid trapping from tracees to the rr process for common syscalls (read, clock_gettime and the like) with an optimization we call "syscall buffering". The basic idea is that the tracee performs the syscall "untraced", we use a seccomp-bpf predicate to detect that the syscall should not cause a ptrace trap, and when the syscall completes the tracee copies its results to a log buffer. During replay we do not use seccomp-bpf; we were using PTRACE_SYSEMU to generate a ptrace trap for every syscall and then emulating the results of all syscalls from the rr process. The obvious major performance improvement is to avoid generating ptrace traps for buffered syscalls during replay, just as we do during recording.
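As a rough sketch of that recording-side predicate (the address constant and filter below are hypothetical simplifications, not rr's actual filter), a seccomp-bpf program can allow syscalls issued from the one known "untraced" syscall instruction and trap everything else to the tracer:

```c
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>

/* Hypothetical address of the single "untraced" syscall instruction in
   the preload library; real rr compares against the actual address. */
#define UNTRACED_SYSCALL_IP 0x70000000u

/* Note: instruction_pointer is 64-bit; for brevity this sketch only
   compares the low 32 bits. */
struct sock_filter untraced_filter[] = {
    BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
             offsetof(struct seccomp_data, instruction_pointer)),
    /* Syscalls issued from the untraced entry point run without a
       ptrace trap... */
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, UNTRACED_SYSCALL_IP, 0, 1),
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    /* ...everything else notifies the tracer (rr) as usual. */
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRACE),
};
```

The filter would be installed in each tracee via prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, ...), so the common-case syscall never leaves the tracee.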
This was tricky to do while preserving our desired invariants that control flow is identical between recording and replay, and data values (in application memory and registers) are identical at all times. For example consider the recvmsg system call, which takes an in/out msg parameter. During recording syscall wrappers in the tracee would copy msg to the syscall log buffer, perform the system call, then copy the data from the log buffer back to msg. Hitherto, during replay we would trap on the system call and copy the saved buffer contents for that system call to the tracee buffer, whereupon the tracee syscall wrappers would copy the data out to msg. To avoid trapping to rr for a sequence of such syscalls we need to copy the entire syscall log buffer to the tracee before replaying them, but then the syscall wrapper for recvmsg would overwrite the saved output when it copies msg to the buffer! I solved this, and some other related problems, by introducing a few functions that behave differently during recording and replay while preserving control flow and making sure that register values only diverge temporarily and only in a few registers. For this recvmsg case I introduced a function memcpy_input_parameter which behaves like memcpy during recording but is a noop during replay: it reads a global is_replay flag and then does a conditional move to set the source address to the destination address during replay.
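A minimal sketch of how such a function can work (the name follows the post, but the real rr implementation likely differs in detail). Both recording and replay execute exactly the same instructions; only a conditional move on the source pointer produces a different result:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical global; real rr keeps an equivalent flag telling the
   preload library whether it is recording or replaying. */
static int is_replay;

/* Behaves like memcpy during recording but is a no-op during replay.
   There is no branch: during replay the source is redirected to the
   destination, so the copy rewrites dest with its own (replayed) data.
   memmove is used because source and destination alias during replay. */
static void memcpy_input_parameter(void* dest, const void* src, size_t n) {
    const void* real_src = is_replay ? dest : src; /* conditional move */
    memmove(dest, real_src, n);
}
```

Because the flag feeds a conditional move rather than a branch, control flow and instruction counts stay identical between recording and replay, which is exactly the invariant rr needs.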
Another interesting problem is recapturing control of the tracee after it has run a set of buffered syscalls. We need to trigger some kind of ptrace trap after reaching a certain point in the syscall log buffer, without altering the control flow of the tracee. I handled this by generating a large array of stub functions (each only one byte, a RET instruction) and after processing the log buffer entry ending at offset O, we call stub function number O/8 (each log record is at least 8 bytes long). rr identifies the last log entry after which it wants to stop the tracee, and sets a breakpoint at the appropriate stub function.
It took a few late nights and a couple of half-days of debugging but it works now and I landed it on master. (Though I expect there may be a few latent bugs to shake out.) The results are good:
This shows much improved replay overhead for Mochitest and Reftest, though not much improvement on Octane. Mochitest and Reftest are quite system-call intensive so our optimization gives big wins there. Mochitests spend a significant amount of time in the HTTP server, which is not recorded by rr, and therefore zero-overhead replay could actually run significantly faster than normal execution, so it's not surprising we're already getting close to parity there. Octane replay is dominated by SCHED context-switch events, each one of which we replay using relatively expensive trickery to context-switch at exactly the right moment.
For rr cognoscenti: as part of eliminating traps for replay of buffered syscalls, I also eliminated the traps for the ioctls that arm/disarm the deschedule-notification events. That was relatively easy (just replace those syscalls with noops during replay) and actually simplified code since we don't have to write those events to the trace and can wholly ignore them during replay.
There's definitely more that can be squeezed out of replay, and probably recording as well. E.g. currently we record a SCHED event every time we try to context-switch, even if we end up rescheduling the thread that was already running (which is common). We don't need to do that, and eliminating those events would reduce syscallbuf flushing and also the number of ptrace traps taken during replay. This should hugely benefit Octane. I'm trying to focus on easy rr improvements with big wins that are likely to pay off for Mozilla developers in the short term; it's difficult to know whether any given improvement is in that category, but I think SCHED elision during recording probably is. (We used to elide recorded SCHED events during replay, but that added significant complexity to reverse execution so I took it out.)
http://robert.ocallahan.org/2015/11/rr-replay-performance-improvements.html
|
Chris AtLee: Firefox builds on the Taskcluster Index |
You may have heard rumblings that FTP is going away...
Over the past few quarters we've been working to migrate our infrastructure off of the ageing "FTP" [1] system to Amazon S3.
We've maintained some backwards compatibility for the time being [2], so that current Firefox CI and release builds are still available via ftp.mozilla.org, or preferably, archive.mozilla.org since we don't support the ftp protocol any more!
Our long term plan is to make the builds available via the Taskcluster Index, and stop uploading builds to archive.mozilla.org.
This is a pretty big change, but we really think this will make it easier to find the builds you're looking for.
The Taskcluster Index allows us to attach multiple "routes" to a build job. Think of a route as a kind of hierarchical tag, or directory. Unlike regular directories, a build can be tagged with multiple routes, for example, according to the revision or buildid used.
A great tool for exploring the Taskcluster Index is the Indexed Artifact Browser.
Here are some recent examples of nightly Firefox builds:
This same build (as of this writing) is also available via its revision:
gecko.v2.mozilla-central.nightly.revision.47b49b0d32360fab04b11ff9120970979c426911.firefox.win64-opt
Or the date:
gecko.v2.mozilla-central.nightly.2015.11.27.latest.firefox.win64-opt
The artifact browser is simply an interface on top of the index API. Using this API, you can also fetch files directly using wget, curl, python requests, etc.:
https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt/artifacts/public/build/firefox-45.0a1.en-US.win64.installer.exe [3]
Similar routes exist for other platforms, for B2G and mobile, and for opt/debug variations. I encourage you to explore the gecko.v2 namespace, and see if it makes things easier for you to find what you're looking for! [4]
Can't find what you want in the index? Please let us know!
[1] | A historical name referring back to the time when we used the FTP protocol to serve these files. Today, the files are available only via HTTP(S). |
[2] | In fact, all Firefox builds are currently uploaded to S3; we've just had to implement some compatibility layers to make S3 appear in many ways like the old FTP service. |
[3] | Yes, you need to know the version number...for now. We're considering stripping that from the filenames; if you have thoughts on this, please get in touch! |
[4] | Ignore the warning on the right about "Task not found" - that just means there are no tasks with that exact route; kind of like an empty directory. |
http://atlee.ca/blog/posts/firefox-builds-on-the-taskcluster-index.html
|
Jan de Mooij: Math.random() and 32-bit precision |
Last week, Mike Malone, CTO of Betable, wrote a very insightful and informative article on Math.random() and PRNGs in general. Mike pointed out that V8/Chrome used a pretty bad algorithm to generate random numbers and, as of this week, V8 uses a better algorithm.
The article also mentioned the RNG we use in Firefox (it was copied from Java a long time ago) should be improved as well. I fully agree with this. In fact, the past days I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+, see bug 322529. We think XorShift128+ is a good choice: we already had a copy of the RNG in our repository, it's fast (even faster than our current algorithm!), and it passes BigCrush (the most complete RNG test available).
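For reference, the XorShift128+ core is tiny. This is the widely published formulation (seeding and the mapping of the 64-bit output to a double in [0, 1) are omitted here):

```c
#include <stdint.h>

/* 128 bits of state; must be seeded to something nonzero. */
static uint64_t s[2];

static uint64_t xorshift128plus(void) {
    uint64_t x = s[0];
    uint64_t const y = s[1];
    s[0] = y;
    x ^= x << 23;
    s[1] = x ^ y ^ (x >> 17) ^ (y >> 26);
    return s[1] + y;
}
```

A handful of shifts, xors and one add per 64-bit output is why it can beat older linear-congruential-style generators on speed while still passing BigCrush.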
While working on this, I looked at a number of different RNGs and noticed Safari/WebKit uses GameRand. It's extremely fast but very weak.
Most interesting to me, though, was that, like the previous V8 RNG, it has only 32 bits of precision: it generates a 32-bit unsigned integer and then divides that by UINT_MAX + 1. This means the result of the RNG is always one of about 4.2 billion different numbers, instead of 9007199 billion (2^53). In other words, it can generate only 0.00005% of all numbers an ideal RNG can generate.
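The difference is easy to express in code. The helper names below are mine; the first mapping is the 32-bit scheme described above, the second is a common way to build a full 53-bit double from two 32-bit outputs:

```c
#include <stdint.h>

/* 32-bit scheme: divide a 32-bit integer by 2^32 (UINT_MAX + 1).
   Only 2^32 distinct results are possible. */
static double to_double_32(uint32_t x) {
    return x / 4294967296.0; /* 2^32 */
}

/* 53-bit scheme: combine 26 bits from one output with 27 bits from
   another, covering all 2^53 evenly spaced doubles in [0, 1). */
static double to_double_53(uint32_t a, uint32_t b) {
    return ((a >> 6) * 134217728.0 /* 2^27 */ + (b >> 5))
           / 9007199254740992.0;   /* 2^53 */
}
```

The 32-bit mapping needs only one generator call per number, which is presumably part of its appeal, but the cost is that the vast majority of representable doubles in [0, 1) can never be produced.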
I wrote a small testcase to visualize this. It generates random numbers and plots all numbers smaller than 0.00000131072.
Here's the output I got in Firefox (old algorithm) after generating 115 billion numbers:
And a Firefox build with XorShift128+:
In Chrome (before Math.random was fixed):
And in Safari:
These pics clearly show the difference in precision.
Safari and older Chrome versions both generate random numbers with only 32 bits of precision. This issue has been fixed in Chrome, but Safari's RNG should probably be fixed as well. Even if we ignore its suboptimal precision, the algorithm is still extremely weak.
Math.random() is not a cryptographically-secure PRNG and should never be used for anything security-related, but, as Mike argued, there are a lot of much better (and still very fast) RNGs to choose from.
http://jandemooij.nl/blog/2015/11/27/math-random-and-32-bit-precision/
|