Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Source: http://planet.mozilla.org/.
This feed is generated from the public RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.


Karl Dubost: CSS prefixes and gzip compression

Wednesday, December 2, 2015, 03:35

I was discussing with Mike how some Web properties target only WebKit/Blink browsers (for their mobile sites), to the point that they do not add the standard properties for certain CSS features. We see this a lot in Japan, for example, but not only there.

We often see code like this:

.nBread{
    min-height: 50px;
    display: -webkit-box;
    -webkit-box-align: center;
    -webkit-box-pack: center;
    padding-bottom: 3px;
}

which is easily fixed by just adding the necessary properties:

.nBread{
    min-height: 50px;
    display: -webkit-box;
    -webkit-box-align: center;
    -webkit-box-pack: center;
    padding-bottom: 3px;
    display: flex;
    align-items: center;
    justify-content: center;
}

It would also make the Web site more future-resilient.

gzip Compression and CSS

Adding standard properties costs a couple of extra bytes in the CSS. Mike wondered whether gzip's pattern matching would keep that cost low when the standard property is added alongside the prefixed one:

#foo {
-webkit-box-shadow: 1px 1px 1px red;
box-shadow: 1px 1px 1px red;
}

Pattern of compression for a CSS file

It seems to work. Building on Mike's idea, I wondered whether the order was significant, so I tested by adding additional properties and changing their order:

mike.prefix.css

#foo {
background-color: #fff;
-webkit-box-shadow: 1px 1px 1px red;
}

mike.both.css

#foo {
background-color: #fff;
-webkit-box-shadow: 1px 1px 1px red;
box-shadow:1px 1px 1px red;
}

mike.both-order.css

#foo {
-webkit-box-shadow: 1px 1px 1px red;
background-color: #fff;
box-shadow:1px 1px 1px red;
}

and then ran the same kind of tests as Mike did.

Pattern of compression for a CSS file

Obviously the order matters, because it helps gzip to find text patterns to compress.

  • raw: 70 compressed:  98 gzip -c mike.prefix.css | wc -c
  • raw: 98 compressed: 100 gzip -c mike.both.css | wc -c
  • raw: 98 compressed: 106 gzip -c mike.both-order.css | wc -c
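
If you want to reproduce the comparison without the gzip command line, here is a minimal Python sketch using the standard gzip module. The file contents are copied from the snippets above; the exact byte counts may differ slightly from the CLI numbers because of gzip header details (for example, whether the filename and timestamp are stored).

import gzip

variants = {
    "mike.prefix.css": (
        "#foo {\n"
        "background-color: #fff;\n"
        "-webkit-box-shadow: 1px 1px 1px red;\n"
        "}\n"
    ),
    "mike.both.css": (
        "#foo {\n"
        "background-color: #fff;\n"
        "-webkit-box-shadow: 1px 1px 1px red;\n"
        "box-shadow:1px 1px 1px red;\n"
        "}\n"
    ),
    "mike.both-order.css": (
        "#foo {\n"
        "-webkit-box-shadow: 1px 1px 1px red;\n"
        "background-color: #fff;\n"
        "box-shadow:1px 1px 1px red;\n"
        "}\n"
    ),
}

for name, css in variants.items():
    raw = css.encode("utf-8")
    compressed = gzip.compress(raw)
    # Adjacent prefixed/unprefixed declarations give gzip longer repeated
    # substrings to match, so the ordering affects the compressed size.
    print(f"{name}: raw {len(raw)} bytes, gzipped {len(compressed)} bytes")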

Flexbox and Gradients Drawbacks

For things like -webkit- flexbox and gradients, it doesn't help very much, because the syntaxes are very different (see the first piece of code in this post). But for properties where the standard version is just the prefixed property without the prefix, the order matters. It would be interesting to test this on real, long CSS files and not just a couple of properties.

Otsukare!

http://www.otsukare.info/2015/12/02/css-gzip-performance


Mozilla Addons Blog: De-coupling Reviews from Signing Unlisted Add-ons

Wednesday, December 2, 2015, 02:34

tl;dr – By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews.

Over the past few days, there have been discussions around the first step of the add-on signing process, which involves a programmatic review of submissions by a piece of code known as the “validator”. The validator can trigger a manual review of submissions for a variety of reasons and halt the signing process, which can delay the release of an add-on because of the signing requirement that will be enforced in Firefox 43 and later versions.

There has been debate over whether the validator is useful at all, since it is possible for a malicious player to write code that bypasses it. We agree the validator has limitations; the reality is we can only detect what we know about, and there’s an awful lot we don’t know about. But the validator is only one component of a review process that we hope will make it easier for developers to ship add-ons, and safer for people to use them. It is not meant to be a catch-all malware detection utility; rather, it is meant to help developers get add-ons into the hands of Firefox users more expediently.

With that in mind, we are going to remove validation as a gating mechanism for unlisted add-ons. We want to make it easier for developers to ship unlisted add-ons, and will perform reviews independently of any signing process. By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews. This date is contingent on how quickly we can make the technical, procedural, and policy changes required to support this. The add-ons signing API, introduced earlier this month, will allow for a completely automated signing process, and will be used as part of this solution.

We’ll continue to require developers to adhere to the Firefox Add-ons policies outlined on MDN, and would ask that they ensure their add-ons conform to those polices prior to submitting them for signing. Developers should also be familiar with the Add-ons Reviewer Guide, which outlines some of the more popular reasons an add-on would fail a review and be subject to blocklisting.

I want to thank everyone for their input and insights over the last week. We want to make sure the experience with Firefox is as painless as possible for Add-on developers and users, and our goals have never included “make life harder”, even if it sometimes seems that way. Please continue to speak out, and feel free to reach out to me or other team members directly.

I’ll post a more concrete overview of the next steps as they’re available, and progress will be tracked in bug 1229197. Thanks in advance for your patience.

kev

https://blog.mozilla.org/addons/2015/12/01/de-coupling-reviews-from-signing-unlisted-add-ons/


Chris AtLee: MozLando Survival Guide

Wednesday, December 2, 2015, 00:31

MozLando is coming!

I thought I would share a few tips I've learned over the years of how to make the most of these company gatherings. These summits or workweeks are always full of awesomeness, but they can also be confusing and overwhelming.

#1 Seek out people

It's great to have a (short!) list of people you'd like to see in person. Maybe somebody you've only met on IRC / vidyo or bugzilla?

Having a list of people you want to say "thank you" in person to is a great way to approach this. Who doesn't like to hear a sincere "thank you" from someone they work with?

#2 Take advantage of increased bandwidth

I don't know about you, but I can find it pretty challenging at times to get my ideas across in IRC or on an etherpad. It's so much easier in person, with a pad of paper or whiteboard in front of you. You can share ideas with people, and have a latency/lag-free conversation! No more fighting AV issues!

#3 Don't burn yourself out

A week of full days of meetings, code sprints, and blue sky dreaming can be really draining. Don't feel bad if you need to take a breather. Go for a walk or a jog. Take a nap. Read a book. You'll come back refreshed, and ready to engage again.

That's it!

I look forward to seeing you all next week!

http://atlee.ca/blog/posts/mozlando-survival-guide.html


Air Mozilla: Webdev Extravaganza: December 2015

Tuesday, December 1, 2015, 21:00

Webdev Extravaganza: December 2015 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

https://air.mozilla.org/webdev-extravaganza-december-2015/


Chris H-C: To-Order Telemetry Dashboards: dashboard-generator

Tuesday, December 1, 2015, 20:47

Say you’ve been glued to my posts about Firefox Telemetry. You became intrigued by the questions you could answer and ask using actual data from actual users, and considered writing your own website using the single-API telemetry-wrapper.

However, you aren’t a web developer. You don’t like JavaScript. Or you’re busy. Or you don’t like reading READMEs on GitHub.

This is where dashboard-generator can step in to help out. Simply visit the website and build-your-own dash to your exacting specifications:


Choose your channel, version, and metric. “-Latest-” will ensure that the generated dashboard will always use the latest version in the selected channel when you reload that page. Otherwise, you might find yourself always looking at GC_MS values from beta 39.

If you are only interested in clients reporting from a particular application, operating system, or with a certain E10s setting then make your choices in Filters.

If you want a histogram like telemetry.mozilla.org’s “Histogram Dashboard” then make sure you select Histogram and then choose if you want the ends of the histogram trimmed, whether (and how sensibly) you want to compare clients across particular settings, and whether to sanitize the results so you only use data that is valid and has a lot of samples.

If you want an evolution plot like telemetry.mozilla.org’s “Evolution Dashboard” then select Evolution. From there, choose whether to use the build date or submission date of samples, how many versions back from the selected one you would like to graph the values over, and whether to sanitize the results so you only use data that is valid and has a lot of samples.

Your choices made, click “Add to Dashboard”. Then choose again! And again!

Make a mistake? Don’t worry, you can remove rows using the ‘-‘ buttons.

Not sure what it’ll look like when you’re done? Hit ‘Generate Dashboard’ and you’ll get a preview in CodePen showing what it will look like and giving you an opportunity to fiddle with the HTML, CSS, and JS.


When you see something you like in the CodePen, hit ‘Save’ and it’ll give you a URL you can use to collaborate with others, and an option to ‘Export’ the whole site for when you want to self-host.

If you find any bugs or have any requests, please file an issue ticket here. I’ll be using it to write an E10s dashboard in the near term, and hope you’ll use it, too!

:chutten


https://chuttenblog.wordpress.com/2015/12/01/to-order-telemetry-dashboards-dashboard-generator/


Mozilla Fundraising: Mozilla’s New Donation Form Features

Tuesday, December 1, 2015, 01:45
We've been redoing our donation form for this end-of-year campaign, and have a couple of major changes. We've talked about this in a previous post. Stripe: our first, and probably biggest, change is using Stripe to accept non-PayPal donations. …

https://fundraising.mozilla.org/mozillas-new-donation-form-features/


Jan de Mooij: Testing Math.random(): Crushing the browser

Tuesday, December 1, 2015, 00:25

(For tl;dr, see the Conclusion.)

A few days ago, I wrote about Math.random() implementations in Safari and (older versions of) Chrome using only 32 bits of precision. As I mentioned in that blog post, I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+. V8 has been using the same algorithm since last week. (Update Dec 1: WebKit is now also using XorShift128+!)

The most extensive RNG test is TestU01. It's a bit of a pain to run: to test a custom RNG, you have to compile the library and then link it to a test program. I did this initially for the SpiderMonkey shell but after that I thought it'd be more interesting to use Emscripten to compile TestU01 to asm.js so we can easily run it in different browsers.

Today I tried this and even though I had never used Emscripten before, I had it running in the browser in less than an hour. Because the tests can take a long time, it runs in a web worker. You can try it for yourself here.

I also wanted to test window.crypto.getRandomValues() but unfortunately it's not available in workers.

Disclaimer: browsers implement Math functions like Math.sin differently and this can affect their precision. I don't know if TestU01 uses these functions and whether it affects the results below, but it's possible. Furthermore, some test failures are intermittent so results can vary between runs.

Results

TestU01 has three batteries of tests: SmallCrush, Crush, and BigCrush. SmallCrush runs only a few tests and is very fast. Crush and especially BigCrush have a lot more tests so they are much slower.

SmallCrush

Running SmallCrush takes about 15-30 seconds. It runs 10 tests producing 15 statistics (results). Here is the number of failures I got in each browser:

  • Firefox Nightly: 1 (BirthdaySpacings)
  • Firefox with XorShift128+: 0
  • Chrome 48: 11
  • Safari 9: 1 (RandomWalk1 H)
  • Internet Explorer 11: 1 (BirthdaySpacings)
  • Edge 20: 1 (BirthdaySpacings)

Chrome/V8 failing 11 out of 15 is not too surprising. Again, the V8 team fixed this last week and the new RNG should pass SmallCrush.

Crush

The Crush battery of tests is much more time consuming. On my MacBook Pro, it finishes in less than an hour in Firefox but in Chrome and Safari it can take at least 2 hours. It runs 96 tests with 144 statistics. Here are the results I got:

  • Firefox Nightly: 12
  • Firefox with XorShift128+: 0
  • Chrome 48: 108
  • Safari 9: 33
  • Internet Explorer 11: 14

XorShift128+ passes Crush, as expected. V8's previous RNG fails most of these tests and Safari/WebKit isn't doing too great either.

BigCrush

BigCrush didn't finish in the browser because it requires more than 512 MB of memory. To fix that I probably need to recompile the asm.js code with a different TOTAL_MEMORY value or with ALLOW_MEMORY_GROWTH=1.

Furthermore, running BigCrush would likely take at least 3 hours in Firefox and more than 6-8 hours in Safari, Chrome, and IE, so I didn't bother.

The XorShift128+ algorithm being implemented in Firefox and Chrome should pass BigCrush (for Firefox, I verified this in the SpiderMonkey shell).

About IE and Edge

I noticed Firefox (without XorShift128+) and Internet Explorer 11 get very similar test failures. When running SmallCrush, they both fail the same BirthdaySpacings test. Here's the list of Crush failures they have in common:

  • 11 BirthdaySpacings, t = 2
  • 12 BirthdaySpacings, t = 3
  • 13 BirthdaySpacings, t = 4
  • 14 BirthdaySpacings, t = 7
  • 15 BirthdaySpacings, t = 7
  • 16 BirthdaySpacings, t = 8
  • 17 BirthdaySpacings, t = 8
  • 19 ClosePairs mNP2S, t = 3
  • 20 ClosePairs mNP2S, t = 7
  • 38 Permutation, r = 15
  • 40 CollisionPermut, r = 15
  • 54 WeightDistrib, r = 24
  • 75 Fourier3, r = 20

This suggests the RNG in IE may be very similar to the one we used in Firefox (imported from Java decades ago). Maybe Microsoft imported the same algorithm from somewhere? If anyone on the Chakra team is reading this and can tell us more, it would be much appreciated :)

IE 11 fails 2 more tests that pass in Firefox. Some failures are intermittent and I'd have to rerun the tests to see if these failures are systematic.

Based on the SmallCrush results I got with Edge 20, I think it uses the same algorithm as IE 11 (not too surprising). Unfortunately the Windows VM I downloaded to test Edge shut down for some reason when it was running Crush so I gave up and don't have full results for it.

Conclusion

I used Emscripten to port TestU01 to the browser. Results confirm most browsers currently don't use very strong RNGs for Math.random(). Both Firefox and Chrome are implementing XorShift128+, which has no systematic failures on any of these tests.

Furthermore, these results indicate IE and Edge may use the same algorithm as the one we used in Firefox.

http://jandemooij.nl/blog/2015/11/30/testing-math-random-crushing-the-browser/


The Servo Blog: This Week In Servo 43

Monday, November 30, 2015, 23:30

In the last two weeks, we landed 165 PRs in the Servo organization’s repositories.

The huge news from the last two weeks is that after some really serious efforts from across the team and community to handle the libc changes required, we’ve upgraded Rust compiler versions! This change is more exciting than usual because it switches us from our custom Rust compiler and onto the nightlies produced by the Rust team. The following upgrade was really quick!

Now that we have separate support for making try builds, we have added dzbarsky, ecoal95, KiChjang, ajeffrey, and waffles. Please nominate your local friendly contributor today!

Notable additions

  • notriddle made GitHub look better
  • ms2ger ran rustfmt and began cleaning up our code
  • bholley landed type system magic for the layout wrapper
  • frewsxcv implemented a compile time url parsing macro
  • dzbarsky implemented currentColor for Canvas
  • pcwalton improved ipc error reporting
  • simonsapin removed string-cache’s plugin usage
  • mbrubeck fixed hit testing for iframe content
  • jgraham and crzytrickster did lots of webdriver work
  • evilpie implemented the document.domain getter
  • waffles improved the feedback when trying to open a missing file
  • mfeckie added “last modified” information to our “good first PR” aggregator, Servo Starters
  • frewsxcv landed compile-time URL parsing
  • kichjang provided MIME types for file:// URLs
  • pcwalton split the engine into multiple sandboxed processes

New Contributors

Screenshots

Screencast of this post being submitted to Hacker News:

(screencast)

Meetings

At the meeting two weeks ago we discussed intermittent test failures, using a mailing list vs. Discourse, the libcpocalypse, and our E-Easy issues. There was no meeting last week.

http://blog.servo.org/2015/11/30/twis-43/


Air Mozilla: Mozilla Weekly Project Meeting, 30 Nov 2015

Monday, November 30, 2015, 22:00

Kartikaya Gupta: Asynchronous scrolling in Firefox

Monday, November 30, 2015, 21:32

In the Firefox family of products, we've had asynchronous scrolling (aka async pan/zoom, aka APZ, aka compositor-thread scrolling) in Firefox OS and Firefox for Android for a while - even though they had different implementations, with different behaviors. We are now in the process of taking the Firefox OS implementation and bringing it to all our other platforms - including desktop and Android. After much hard work by many people, including but not limited to :botond, :dvander, :mattwoodrow, :mstange, :rbarker, :roc, :snorp, and :tn, we finally have APZ enabled on the nightly channel for both desktop and Android. We're working hard on fixing outstanding bugs and getting the quality up before we let it ride the trains out to DevEdition, Beta, and the release channel.

If you want to try it on desktop, note that APZ requires e10s to be enabled, and is currently only enabled for mousewheel/trackpad scrolling. We do have plans to implement it for other input types as well, although that may not happen in the initial release.

Although getting the basic machinery working took some effort, we're now mostly done with that and are facing a different but equally challenging aspect of this change - the fallout on web content. Modern web pages have access to many different APIs via JS and CSS, and implement many interesting scroll-linked effects, often triggered by the scroll event or driven by a loop on the main thread. With APZ, these approaches don't work quite so well because inherently the user-visible scrolling is async from the main thread where JS runs, and we generally avoid blocking the compositor on main-thread JS. This can result in jank or jitter for some of these effects, even though the main page scrolling itself remains smooth. I picked a few of the simpler scroll effects to discuss in a bit more detail below - not a comprehensive list by any means, but hopefully enough to help you get a feel for some of the nuances here.

Smooth scrolling

Smooth scrolling - that is, animating the scroll to a particular scroll offset - is something that is fairly common on web pages. Many pages do this using a JS loop to animate the scroll position. Without taking advantage of APZ, this will still work, but can result in less-than-optimal smoothness and framerate, because the main thread can be busy with doing other things.

Since Firefox 36, we've had support for the scroll-behavior CSS property which allows content to achieve the same effect without the JS loop. Our implementation for scroll-behavior without APZ enabled still runs on the main thread, though, and so can still end up being janky if the main thread is busy. With APZ enabled, the scroll-behavior implementation triggers the scroll animation on the compositor thread, so it should be smooth regardless of load on the main thread. Polyfills for scroll-behavior or old-school implementations in JS will remain synchronous, so for best performance we recommend switching to the CSS property where possible. That way as APZ rolls out to release, you'll get the benefits automatically.

Here is a simple example page that has a spinloop to block the main thread for 500ms at a time. Without APZ, clicking on the buttons results in a very janky/abrupt scroll, but with APZ it should be smooth.

position:sticky

Another common paradigm seen on the web is "sticky" elements - they scroll with the page for a bit, and then turn into position:fixed elements after a point. Again, this is usually implemented with JS listening for scroll events and updating the styles on the elements based on the scroll offset. With APZ, scroll events are going to be delayed relative to what the user is seeing, since the scroll events arrive on the main thread while scrolling is happening on the compositor thread. This will result in glitches as the user scrolls.

Our recommended approach here is to use position:sticky when possible, which we have supported since Firefox 32, and which we have support for in the compositor. This CSS property allows the element to scroll normally but take on the behavior of position:fixed beyond a threshold, even with APZ enabled. This isn't supported across all browsers yet, but there are a number of polyfills available - see the resources tab on the Can I Use position:sticky page for some options.

Again, here is a simple example page that has a spinloop to block the main thread for 500ms at a time. With APZ, the JS version will be laggy but the position:sticky version should always remain in the right place.

Parallax

Parallax. Oh boy. There's a lot of different ways to do this, but almost all of them rely on listening to scroll events and updating element styles based on that. For the same reasons as described in the previous section, implementations of parallax scrolling that are based on scroll events are going to be lagging behind the user's actual scroll position. Until recently, we didn't have a solution for this problem.

However, a few days ago :mattwoodrow landed compositor support for asynchronous scroll adjustments of 3D transforms, which allows a pure CSS parallax implementation to work smoothly with APZ. Keith Clark has a good writeup on how to do this, so I'm just going to point you there. All of his demo pages should scroll smoothly in Nightly with APZ enabled.

Unfortunately, it looks like this CSS-based approach may not work well across all browsers, so please make sure to test carefully if you want to try it out. Also, if you have suggestions on other methods on implementing parallax so that it doesn't rely on a responsive main thread, please let us know. For example, :mstange created one at http://tests.themasta.com/transform-fixed-parallax.html which we should be able to support in the compositor without too much difficulty.

Other features

I know that there are other interesting scroll-linked effects that people are doing or want to do on the web, and we'd really like to support them with asynchronous scrolling. The Blink team has a bunch of different proposals for browser APIs that can help with these sorts of things, including things like CompositorWorker and scroll customization. For more information and to join the discussion on these, please see the public-houdini mailing list. We'd love to get your feedback!

(Thanks to :botond and :mstange for reading a draft of this post and providing feedback.)

https://staktrace.com/spout/entry.php?id=834


Gijs Kruitbosch: Did it land?

Monday, November 30, 2015, 16:22

I wrote a thing to check if your patch landed/stuck. It’s on github because that’s what people seem to do these days. That means you can use it here:

Did it land?

The “point” of this mini-project is to be able to easily determine whether bug X made today’s nightly, or if bug Y landed in beta 5. Sometimes non-graph changelogs, such as are most accessible on hgweb, can be misleading (ie beta 5 was tagged after you landed, but on a revision before you landed…), plus it’s boring to look up revisions manually in a bug, and then look them up on hgweb, and then try to determine if revision A is in the ancestry tree for revision B. So I automated it.

Note that the tool doesn’t:

  • deal cleverly with backouts. It’ll give you revision hashes from the bug, but if it notices comments that seem to indicate something got backed out, it will be cautious about saying “yes, this landed”. If you know that you bounced once but the last revision(s) is/are definitely “enough” to have the fixes be considered “landed”, then you can just switch to looking up a revision instead of a bug, copy-paste the last hash, and try that one. With a bit of work it could probably expose the internal data about which commits landed before a nightly in the UI – the data is there!
  • use hg to extract the bug metadata. It’s dumb and just asks for a bug’s comments from bugzilla. Pull requests or other help about how to do this “properly” welcome.
  • deal cleverly with branching. If you select aurora/beta, it will look for commits that landed on aurora/beta, not for commits that landed on “earlier” trees and made their way down to aurora/beta with the regular train. This is not super hard to fix, I think, but I haven’t gotten around to it, and I don’t think it will be a very common case.
  • have a particularly nice UI. Feel free to send me pull requests to make it look better.

http://www.gijsk.com/blog/2015/11/did-it-land/


Andreas Tolfsen: WebDriver update from TPAC 2015

Monday, November 30, 2015, 16:19

I came back from TPAC (the W3C's Technical Plenary / Advisory Committee meeting week) earlier this month, where I attended the Browser Testing and Tools Working Group's meetings on WebDriver.

Unlike previous meetings, this was the first time we had a reasonably up-to-date specification text to discuss. That clearly paid off, because we were able to make some defining decisions on long-standing, controversial topics. It shows how important it is for assigned action items to be completed in time before a specification meeting, and to have someone with time dedicated to working on the spec.

Visibility

The WG decided to punt the element visibility, or “displayedness” concept, to level 2 of the specification and in the meantime push for better visibility primitives in the platform. I’ve previously outlined in detail the reasons why it’s not just a bad idea—but impossible—for WebDriver to specify this concept. Instead we will provide a non-normative description of Selenium’s visibility atom in an appendix to give some level of consistency for implementors.

Fortunately Selenium's visibility approximation atom can be implemented entirely in content JavaScript, which means it can be provided both in client bindings and as extension commands.

This does not mean we are giving up on visibility. There is general agreement in the WG that it is a desirable feature, but since it’s impossible to define naked eye visibility using existing platform APIs we call upon other WGs to help outline this. Visibility of elements in viewport is not a primitive that naturally fits within the scope of WebDriver.

Our decision has implications for element interactability, which is used to determine if you can interact with an element. This previously relied on the element visibility algorithm, but as an alternative to the tree traversal visibility algorithm we dismissed, we are experimenting with a somewhat naïve hit-testing alternative that takes the centre coordinates of the portion of the element inside the viewport and calls elementsAtPoint, ignoring elements that are opaque.

Attributes and properties

We had previously decided to make two separate commands for getting attributes and properties. This was controversial because it deviates from the behaviour of Selenium’s getAttribute, that conflates the DOM concepts of attributes and properties.

Because the WG decided to stick with David Burns’s proposal on special-casing boolean attributes, the good news is that the Selenium behaviour can be emulated using WebDriver primitives.

In practice this means that when Get Element Attribute is called for an element that carries a boolean attribute, it will return the string "true", rather than the DOM attribute value, which would normally be an empty string. We return a string so that dynamically typed programming languages can evaluate it into something truthful, and because there is a belief in the WG that an empty string return value for a boolean attribute that is present would be confusing to users.
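
To make that emulation concrete, here is a rough, hypothetical Python sketch combining the two protocol endpoints (the /attribute/{name} and /property/{name} GET routes). The session plumbing and the simple property-then-attribute fallback are illustrative simplifications, not the Selenium atom itself:

import requests

def wd_get(base_url, session_id, element_id, kind, name):
    # kind is "attribute" or "property"; both are GET endpoints in the
    # WebDriver protocol and return a JSON body of the form {"value": ...}.
    url = f"{base_url}/session/{session_id}/element/{element_id}/{kind}/{name}"
    response = requests.get(url)
    response.raise_for_status()
    return response.json()["value"]

def selenium_style_get_attribute(base_url, session_id, element_id, name):
    # Very rough emulation of Selenium's conflated getAttribute: prefer the
    # live DOM property, fall back to the attribute. For a boolean attribute
    # that is present, Get Element Attribute returns the string "true".
    value = wd_get(base_url, session_id, element_id, "property", name)
    if value in (None, ""):
        value = wd_get(base_url, session_id, element_id, "attribute", name)
    return value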

Because we don't know which attributes are boolean attributes from the DOM's point of view, it's not the cleanest approach, since it means we must maintain a hard-coded list in WebDriver. It will also arguably cause problems for custom elements, because it is not guaranteed that they mirror the default attribute values.

Test suite

One of the requirements for moving to REC is writing a decent test suite. WebDriver is in the fortunate position that it's an evolution of existing implementations, each with their own body of tests, many of which we can probably re-purpose. One of the challenges with the existing tests is that the harness does not easily allow for testing the lower-level details of the protocol.

So far I have been able to make a start with merging Microsoft’s pending pull requests. Not all the tests merged match what the specification mandates any longer, but we decided to do this before any substantial harness work is done, to eliminate the need for Microsoft to maintain their own fork of Web Platform Tests.

Onwards

Microsoft and Mozilla are both working on implementations, so there is a pressing need for a test suite that reflects the realities of the specification. Vital chapters, such as Element Retrieval and Interactions, are either undefined or in such a poor state that they should be considered unimplementable.

Despite these reservations, I’d say the WebDriver spec is in a better state than ever before. At TPAC we also had meetings about possible future extensions, including permissions and how WebDriver might help facilitate testing of WebBluetooth as well as other platform APIs.

The WG is concurrently pushing for WebDriver to be used in Web Platform Tests to automate the “non-automatable” test cases that require human interaction or privileged access. In fact, there’s an ongoing Quarter of Contribution project sponsored by Mozilla to work on facilitating WebDriver in a sort of “meta-circular” fashion, directly from testharness.js tests.

But more on that later. (-:

https://sny.no/2015/11/tpac


This Week In Rust: This Week in Rust 107

Monday, November 30, 2015, 08:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • Diesel. A safe, extensible ORM and Query Builder for Rust.
  • Chomp. Fast parser combinator library for Rust.
  • libkeccak-tiny. A tiny implementation of SHA-3, SHAKE, Keccak, and sha3sum in Rust.
  • Waitout. Simple interface for tracking and awaiting the completion of multiple asynchronous tasks.

Updates from Rust Core

69 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • androm3da
  • ebadf
  • Ivan Stankovic
  • Jack Fransham
  • Jeffrey Seyfried
  • Josh Austin
  • Kevin Yeh
  • Matthias Bussonnier
  • Philipp Matthias Schäfer
  • xd1le

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is Chrono, a crate that offers very handy timezone-aware Duration and Date/Time types.

Thanks to Ygg01 for the suggestion. Submit your suggestions for next week!

http://this-week-in-rust.org/blog/2015/11/30/this-week-in-rust-107/


Emma Irwin: Revisiting the Word ‘Recognition’ in #FOSS and the Dream of Open Credentials

Monday, November 30, 2015, 04:38

I think a lot about ways we can better surface Participation as real-world offering for professional and personal development.

And this tweet from Laura  triggered all kinds of thinking.

Check out this @BryanMMathers and @dajbelshaw on why open source needs open badges: https://t.co/9By0pyiCd0 @opensourceway

— Laura Hilliger (@epilepticrabbit) November 27, 2015

Most thinking was reminiscent at first. 

Working on open projects teaches relevant skills, helps establish mentorship relationships and surfaces hidden strengths and talents. It’s my own story.

And then reflective..

The reason we've struggled to make participation a universally recognized opportunity for credential building is our confusion over the term 'recognition'. In Open Source we use this term for several similar, yet entirely different, meanings:

* Gratitude (“hey thanks for that !”)

* You’re making progress (“great work, keep going! “)

* Appreciation (“we value you”)

* You completed or finished something (congratulations you did it!)

In my opinion, many experiments with badges for FOSS participation have actually compounded the problem: if I am issued a badge I didn't request (and I have many of these), or don't value (I have many of these too), we're using the process as a prod and not as a genuine acknowledgement of accomplishment. That's OK, gamification is OK, but it's not credential building in the real-world sense; we need to separate these two 'use cases' to move forward with open credentials.

And I kept thinking…

The Drupal community already does a good job at helping people surface real-world credentials. Drupal.org member profiles expose contribution and community leadership, while business profiles demonstrate (and advertise) their commitment through project sponsorship and contribution. Drupal also has a fantastic series of project ladders, which I've always thought would be a great way to experiment with badges, designing connected learning experiences through participation. Drupal ladders definitely inspired my own work around a 'Participation Standard', and I wonder how projects can work together a bit more on defining a standard for 'Distributed Recognition', even between projects like Mozilla, Drupal and Fedora.

@sunnydeveloper oh I agree! Drupal has its own special benefits from this too, around distributed recognition of contribution /@dajbelshaw

— Rachel Lawson (@rachel_norfolk) November 27, 2015

And the relentless thinking continued…

@makerbase has potential to profile FOSS communities, but without manual-additions being the only way to add contributors. — Emma Irwin (@sunnydeveloper) November 28, 2015

@sunnydeveloper we are definitely thinking about that! /cc @amateurhuman

— Anil Dash (@anildash) November 28, 2015

I then posed the question in our Discourse, asking what 'Open Credentials' could look like for Participation at Mozilla. There are some great responses so far, including solutions like Makerbase and a reminder of how hard it currently is to be 'seen' in the Mozilla community, and thus how important this topic actually is.

And the thinking will continue, hopefully as a growing group ….

What I do know is that we have to stop using the word 'recognition' as a catch-all, and that there is a huge opportunity to build Open Credentials through Participation; the leadership framework might be a way to test what that looks like.

If you have opinions, I would love to have you join our discussion thread!

image by jingleslenobel CC by-NC-ND 2.0


http://tiptoes.ca/open-certification/


Robert O'Callahan: Even More rr Replay Performance Improvements!

Sunday, November 29, 2015, 23:33

While writing my last blog post I realized I should try to eliminate no-op reschedule events from rr traces. The patch turned out to be very easy, and the results are impressive:

Now replay is faster than recording in all the benchmarks, and for Mochitest is about as fast as normal execution. (As discussed in my previous post, this is probably because the replay excludes some code that runs during normal execution: the test harness and the HTTP server.) Hopefully this turns into real productivity gains for rr users.

http://robert.ocallahan.org/2015/11/even-more-rr-replay-performance.html


Adam Roach: Better Living through Tracking Protection

Sunday, November 29, 2015, 03:26
There's been a bit of a hullabaloo in the press recently about blocking of ads in web browsers. Very little of the conversation is new, but the most recent round of discussion has been somewhat louder and more excited, in part because of Apple's recent decision to allow web content blockers on the iPhone and iPad.

In this latest round of salvos, the online ad industry has taken a pretty brutal beating, and key players appear to be rethinking long-entrenched strategies. Even the Interactive Advertising Bureau -- who has referred to ad blocking as "robbery" and "an extortionist scheme" -- has gone on record to admit that the Internet ads got so bad that users basically had no choice but to start blocking them.

So maybe things will get better in the coming months and years, as online advertisers learn to moderate their behavior. Past behavior shows a spotty track record in this area, though, and change will come slowly. In the meanwhile, there are some pretty good tools that can help you take back control of your web experience.

How We Got Here

While we probably all remember the nadir of online advertising -- banners exhorting users to "punch the monkey to win $50", epilepsy-inducing ads for online gambling, and out-of-control popup ads for X10 cameras -- the truth is that most ad networks have already pulled back from the most obvious abuses of users' eyeballs. It would appear that annoying users into spending money isn't a winning strategy.

Unfortunately, the move away from hyperkinetic ads to more subtle ones was not a retreat as much as a carefully calculated refinement. Ads nowadays are served by colossal ad networks with tendrils on every site -- and they're accompanied by pretty sophisticated code designed to track you around the web.

The thought process that went into this is: if we can track you enough, we learn a lot about who you are and what your interests are. This is driven by the premise that people will be less annoyed by ads that actually fit their interests; and, at the same time, such ads are far more likely to convert into a sale.

Matching relevant ads to users was a reasonable goal. It should have been a win-win for both advertisers and consumers, as long as two key conditions were met: (1) the resulting system didn't otherwise ruin the web browsing experience, and (2) users who didn't want their personal movements across the web tracked could tell advertisers not to track them, and have those requests honored.

Neither is true.

Tracking Goes off the Rails

Just like advertisers went overboard with animated ads, pop-ups, pop-unders, noise-makers, interstitials, and all the other overtly offensive behavior, they've gone overboard with tracking.

You hear stories of overreach all the time: just last night, I had a friend recount how she got an email (via Gmail) from a friend that mentioned front-loaders, and had to suffer through weeks of banner ads for construction equipment on unrelated sites. The phenomenon is so bad and so well-known, even The Onion is making fun of it.

Beyond the "creepy" factor of having ad agencies building a huge personal profile for you and following you around the web to use it, user tracking code itself has become so bloated as to ruin the entire web experience.

In fact, on popular sites such as CNN, code to track users accounts for somewhere on the order of three times as much memory usage as the actual page content: a recent demo of the Firefox memory tracking tool found that 30 MB of the 40 MB used to render a news article on CNN was consumed by code whose sole purpose was user tracking.

This drags your browsing experience to a crawl.

Ad Networks Know Who Doesn't Want to be Tracked, But Don't Care.

Under the assumption that advertisers were actually willing to honor user choice, there has been a large effort to develop and standardize a way for users to indicate to ad networks that they didn't want to be tracked. It's been implemented by all major browsers, and endorsed by the FTC.

For this system to work, though, advertisers need to play ball: they need to honor user requests not to be tracked. As it turns out, advertisers aren't actually interested in honoring users' wishes; as before, they see a tiny sliver of utility in abusing web users with the misguided notion that this somehow translates into profits. Attempts to legislate conformance were made several years ago, but these never really got very far.

So what can you do? The balance of power seems so far out of whack that consumers have little choice than to sit back and take it.

You could, of course, run one of any number of ad blockers -- Adblock Plus is quite popular -- but this is a somewhat nuclear option. You're throwing out the slim selection of good players with the bad ones; and, let's face it, someone's gotta provide money to keep the lights on at your favorite website.

Even worse, many ad blockers employ techniques that consume as much (or more) memory and as much (or more) time as the trackers they're blocking -- and Adblock Plus is one of the worst offenders. They'll stop you from seeing the ads, but at the expense of slowing down everything you do on the web.

What you can do

When people ask me how to fix this, I recommend a set of three tools to make their browsing experience better: Firefox Tracking Protection, Ghostery, and (optionally) Privacy Badger. (While I'm focusing on Firefox here, it's worth noting that both Ghostery and Privacy Badger are also available for Chrome.)

1. Turn on Tracking Protection

Firefox Tracking Protection is automatically activated in recent versions of Firefox whenever you enter "Private Browsing" mode, but you can also manually turn it on to run all the time. If you go to the URL bar and type in "about:config", you'll get into the advanced configuration settings for Firefox (you may have to agree to be careful before it lets you in). Search for a setting called "privacy.trackingprotection.enabled", and then double-click next to it where it says "false" to change it to "true." Once you do that, Tracking Protection will stay on regardless of whether you're in private browsing mode.

Firefox tracking protection uses a curated list of sites that are known to track you and known to ignore the "Do Not Track" setting. Basically, it's a list of known bad actors. And a study of web page load times determined that just turning it on improves page load times by a median of 44%.

2. Install and Configure Ghostery

There's also an add-on that works similar to Tracking Protection, called Ghostery. Install it from addons.mozilla.org, and then go into its configuration (type "about:addons" into your URL bar, and select the "Preferences" button next to Ghostery). Now, scroll down to "blocking options," near the bottom of the page. Under the "Trackers" tab, click on "select all." Then, uncheck the "widgets" category. (Widgets can be used to track you, but they also frequently provide useful functions for a web page: they're a mixed bag, but I find that their utility outweighs their cost).

Ghostery also uses a curated list, but it's far more aggressive in what it considers to be tracking. It also allows you fine-grained control over what you block, and lets you easily whitelist sites, if you find that they're not working quite right with all the potential trackers removed.

Poke around at the other options in there, too. It's really a power user's tracker blocker.

3. Optionally, Install Privacy Badger

Unlike tracking protection and Ghostery, Privacy Badger isn't a curated list of known trackers. Instead, it's a tool that watches what webpages do. When it sees behavior that could be used to track users across multiple sites, it blocks that behavior from ever happening again. So, instead of knowing ahead of time what to block, it learns what to block. In other words, it picks up where the other two tools leave off.

This sounds really good on paper, and does work pretty well in practice. I ran with Privacy Badger turned on for about a month, with mostly good results. Unfortunately, its "learning" can be a bit aggressive, and I found that it broke sites far more frequently than Ghostery. So the trade-off here: if you run Privacy Badger, you'll have much better protection against tracking, but you'll also have to be alert to the kinds of defects that it can introduce, and go turn it off when it interferes with what you're trying to do. Personally, I turned it off a few months ago, and haven't bothered to reactivate it yet; but I'll be checking back periodically to see if they've tuned their algorithms (and their yellow-list) to be more user-friendly.

If you're interested in giving it a spin, you can download Privacy Badger from the addons.mozilla.org website.

http://sporadicdispatches.blogspot.com/2015/11/better-living-through-tracking.html


John O'Duinn: “Distributed” ER#3 now available!

Saturday, November 28, 2015, 23:29

Earlier this week, just before the US Thanksgiving holidays, we shipped Early Release #3 of my “Distributed” book-in-progress.

Early Release #3 (ER#3) adds two new chapters, Ch.1 (remoties trends) and Ch.2 (the real cost of an office), plus many tweaks/fixes to the previous chapters. There are now a total of 9 chapters available (1, 2, 4, 6, 7, 8, 10, 13, 15), arranged into three sections. (These chapters were the inspiration for recent presentations and blog posts here, here and here.)

ER#3 comes one month after ER#2. You can buy ER#3 by clicking here, or by clicking on the thumbnail of the book cover. Anyone who already has ER#1 or ER#2 should get prompted with a free update to ER#3. (If you don't, please let me know!) And yes, you'll get updated when ER#4 comes out next month.

Please let me know what you think of the book so far. Your feedback helps shape/scope the book! Is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, which you wish you knew when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

Thank you to everyone who’s already sent me feedback/opinions/corrections – all changes that are making the book better. I’m merging changes/fixes as fast as I can – some days are fixup days, some days are new writing days. All great to see coming together. To make sure that any feedback doesn’t get lost or caught in spam filters, it’s best to email a special email address (feedback at oduinn dot com) although feedback via twitter and linkedin works also. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.

Now, it’s time to get back to typing. ER#4 is coming soon!

John.

http://oduinn.com/blog/2015/11/28/distributed-er3-now-available/


Robert O'Callahan: rr Replay Performance Improvements

Saturday, November 28, 2015, 08:04

I've been spending a lot of time using rr, as have some other Mozilla developers, and it occurred to me a small investment in speeding up the debugging experience could pay off in improved productivity quite quickly. Until recently no-one had ever really done any work to speed up replay, so there was some low-hanging fruit.

During recording we avoid trapping from tracees to the rr process for common syscalls (read, clock_gettime and the like) with an optimization we call "syscall buffering". The basic idea is that the tracee performs the syscall "untraced", we use a seccomp-bpf predicate to detect that the syscall should not cause a ptrace trap, and when the syscall completes the tracee copies its results to a log buffer. During replay we do not use seccomp-bpf; we were using PTRACE_SYSEMU to generate a ptrace trap for every syscall and then emulating the results of all syscalls from the rr process. The obvious major performance improvement is to avoid generating ptrace traps for buffered syscalls during replay, just as we do during recording.

This was tricky to do while preserving our desired invariants that control flow is identical between recording and replay, and data values (in application memory and registers) are identical at all times. For example consider the recvmsg system call, which takes an in/out msg parameter. During recording syscall wrappers in the tracee would copy msg to the syscall log buffer, perform the system call, then copy the data from the log buffer back to msg. Hitherto, during replay we would trap on the system call and copy the saved buffer contents for that system call to the tracee buffer, whereupon the tracee syscall wrappers would copy the data out to msg. To avoid trapping to rr for a sequence of such syscalls we need to copy the entire syscall log buffer to the tracee before replaying them, but then the syscall wrapper for recvmsg would overwrite the saved output when it copies msg to the buffer! I solved this, and some other related problems, by introducing a few functions that behave differently during recording and replay while preserving control flow and making sure that register values only diverge temporarily and only in a few registers. For this recvmsg case I introduced a function memcpy_input_parameter which behaves like memcpy during recording but is a noop during replay: it reads a global is_replay flag and then does a conditional move to set the source address to the destination address during replay.

Another interesting problem is recapturing control of the tracee after it has run a set of buffered syscalls. We need to trigger some kind of ptrace trap after reaching a certain point in the syscall log buffer, without altering the control flow of the tracee. I handled this by generating a large array of stub functions (each only one byte, a RET instruction) and after processing the log buffer entry ending at offset O, we call stub function number O/8 (each log record is at least 8 bytes long). rr identifies the last log entry after which it wants to stop the tracee, and sets a breakpoint at the appropriate stub function.

It took a few late nights and a couple of half-days of debugging but it works now and I landed it on master. (Though I expect there may be a few latent bugs to shake out.) The results are good:

This shows much improved replay overhead for Mochitest and Reftest, though not much improvement on Octane. Mochitest and Reftest are quite system-call intensive so our optimization gives big wins there. Mochitests spend a significant amount of time in the HTTP server, which is not recorded by rr, and therefore zero-overhead replay could actually run significantly faster than normal execution, so it's not surprising we're already getting close to parity there. Octane replay is dominated by SCHED context-switch events, each one of which we replay using relatively expensive trickery to context-switch at exactly the right moment.

For rr cognoscenti: as part of eliminating traps for replay of buffered syscalls, I also eliminated the traps for the ioctls that arm/disarm the deschedule-notification events. That was relatively easy (just replace those syscalls with noops during replay) and actually simplified code since we don't have to write those events to the trace and can wholly ignore them during replay.

There's definitely more that can be squeezed out of replay, and probably recording as well. E.g. currently we record a SCHED event every time we try to context-switch, even if we end up rescheduling the thread that was already running (which is common). We don't need to do that, and eliminating those events would reduce syscallbuf flushing and also the number of ptrace traps taken during replay. This should hugely benefit Octane. I'm trying to focus on easy rr improvements with big wins that are likely to pay off for Mozilla developers in the short term; it's difficult to know whether any given improvement is in that category, but I think SCHED elision during recording probably is. (We used to elide recorded SCHED events during replay, but that added significant complexity to reverse execution so I took it out.)

http://robert.ocallahan.org/2015/11/rr-replay-performance-improvements.html


Chris AtLee: Firefox builds on the Taskcluster Index

Saturday, November 28, 2015, 00:21

RIP FTP?

You may have heard rumblings that FTP is going away...


Over the past few quarters we've been working to migrate our infrastructure off of the ageing "FTP" [1] system to Amazon S3.

We've maintained some backwards compatibility for the time being [2], so that current Firefox CI and release builds are still available via ftp.mozilla.org, or preferably, archive.mozilla.org since we don't support the ftp protocol any more!

Our long term plan is to make the builds available via the Taskcluster Index, and stop uploading builds to archive.mozilla.org

How do I find my builds???


This is pretty big change, but we really think this will make it easier to find the builds you're looking for.

The Taskcluster Index allows us to attach multiple "routes" to a build job. Think of a route as a kind of hierarchical tag, or directory. Unlike regular directories, a build can be tagged with multiple routes, for example, according to the revision or buildid used.

A great tool for exploring the Taskcluster Index is the Indexed Artifact Browser

Here are some recent examples of nightly Firefox builds:

The latest win64 nightly Firefox build is available via the
gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt route

This same build (as of this writing) is also available via its revision:

gecko.v2.mozilla-central.nightly.revision.47b49b0d32360fab04b11ff9120970979c426911.firefox.win64-opt

Or the date:

gecko.v2.mozilla-central.nightly.2015.11.27.latest.firefox.win64-opt

The artifact browser is simply an interface on top of the index API. Using this API, you can also fetch files directly using wget, curl, python requests, etc.:

https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt/artifacts/public/build/firefox-45.0a1.en-US.win64.installer.exe [3]
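
For example, here is a minimal Python sketch (using the requests library) that downloads the artifact from the index route above; the URL, including the hard-coded version number in the filename, is copied from the example and will go stale over time:

import requests

# Index route and artifact path copied from the example above.
url = ("https://index.taskcluster.net/v1/task/"
       "gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt"
       "/artifacts/public/build/firefox-45.0a1.en-US.win64.installer.exe")

response = requests.get(url, stream=True)
response.raise_for_status()

# Stream the installer to disk in 64 KB chunks.
with open("firefox-nightly-win64.installer.exe", "wb") as installer:
    for chunk in response.iter_content(chunk_size=64 * 1024):
        installer.write(chunk)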

Similar routes exist for other platforms, for B2G and mobile, and for opt/debug variations. I encourage you to explore the gecko.v2 namespace, and see if it makes things easier for you to find what you're looking for! [4]

Can't find what you want in the index? Please let us know!

[1]A historical name referring back to the time when we used the FTP protocol to serve these files. Today, the files are available only via HTTP(S).
[2]in fact, all Firefox builds right now are currently uploaded to S3. we've just had to implement some compatibility layers to make S3 appear in many ways like the old FTP service.
[3]yes, you need to know the version number...for now. we're considering stripping that from the filenames. if you have thoughts on this, please get in touch!
[4]ignore the warning on the right about "Task not found" - that just means there are no tasks with that exact route; kind of like an empty directory

http://atlee.ca/blog/posts/firefox-builds-on-the-taskcluster-index.html


Jan de Mooij: Math.random() and 32-bit precision

Friday, November 27, 2015, 23:45

Last week, Mike Malone, CTO of Betable, wrote a very insightful and informative article on Math.random() and PRNGs in general. Mike pointed out V8/Chrome used a pretty bad algorithm to generate random numbers and, since this week, V8 uses a better algorithm.

The article also mentioned the RNG we use in Firefox (it was copied from Java a long time ago) should be improved as well. I fully agree with this. In fact, the past days I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+, see bug 322529. We think XorShift128+ is a good choice: we already had a copy of the RNG in our repository, it's fast (even faster than our current algorithm!), and it passes BigCrush (the most complete RNG test available).
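
For illustration, here is a minimal Python sketch of an xorshift128+ step and a 53-bit double conversion. The shift constants follow a commonly published variant of the algorithm, and the seeding and double conversion are simplified sketches rather than SpiderMonkey's exact code:

MASK64 = (1 << 64) - 1  # emulate 64-bit unsigned arithmetic

class XorShift128Plus:
    def __init__(self, seed0, seed1):
        # The 128-bit state must not be all zero; seeds here are arbitrary.
        assert seed0 or seed1
        self.s0 = seed0 & MASK64
        self.s1 = seed1 & MASK64

    def next_uint64(self):
        s1, s0 = self.s0, self.s1
        self.s0 = s0
        s1 = (s1 ^ (s1 << 23)) & MASK64
        self.s1 = s1 ^ s0 ^ (s1 >> 17) ^ (s0 >> 26)
        return (self.s1 + s0) & MASK64

    def next_double(self):
        # Keep 53 bits so the result has full double precision in [0, 1),
        # unlike a 32-bit value divided by 2**32.
        return (self.next_uint64() >> 11) * (1.0 / (1 << 53))

rng = XorShift128Plus(0x9E3779B97F4A7C15, 0xBF58476D1CE4E5B9)
print([round(rng.next_double(), 6) for _ in range(3)])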

While working on this, I looked at a number of different RNGs and noticed Safari/WebKit uses GameRand. It's extremely fast but very weak.

Most interesting to me, though, was that, like the previous V8 RNG, it has only 32 bits of precision: it generates a 32-bit unsigned integer and then divides that by UINT_MAX + 1. This means the result of the RNG is always one of about 4.2 billion different numbers, instead of 9007199 billion (2^53). In other words, it can generate 0.00005% of all numbers an ideal RNG can generate.
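
As a quick sanity check on that percentage, here is the arithmetic in a couple of lines of Python:

# A 32-bit RNG divided by 2**32 can produce at most 2**32 distinct results,
# while an ideal Math.random() can produce 2**53 distinct doubles in [0, 1).
distinct_32bit = 2 ** 32
distinct_ideal = 2 ** 53
print(f"{distinct_32bit / distinct_ideal:.7%}")  # ~0.0000477%, i.e. about 0.00005%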

I wrote a small testcase to visualize this. It generates random numbers and plots all numbers smaller than 0.00000131072.

Here's the output I got in Firefox (old algorithm) after generating 115 billion numbers:

And a Firefox build with XorShift128+:

In Chrome (before Math.random was fixed):

And in Safari:

These pics clearly show the difference in precision.

Conclusion

Safari and older Chrome versions both generate random numbers with only 32 bits of precision. This issue has been fixed in Chrome, but Safari's RNG should probably be fixed as well. Even if we ignore its suboptimal precision, the algorithm is still extremely weak.

Math.random() is not a cryptographically-secure PRNG and should never be used for anything security-related, but, as Mike argued, there are a lot of much better (and still very fast) RNGs to choose from.

http://jandemooij.nl/blog/2015/11/27/math-random-and-32-bit-precision/


