
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/

Source: http://planet.mozilla.org/.
This feed is generated from the public RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated; it may not match the content of the original page. The syndication was created automatically at the request of readers of this RSS feed.


Support.Mozilla.Org: What’s up with SUMO – 27th November

Friday, 27 November 2015, 19:48

Hello, SUMO Nation!

Have you had a good week so far? We hope you have! Here are a few pertinent updates from the world of SUMO for your reading pleasure.

Welcome, new contributors!

…at least that’s the only one we know of! So, if you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Scribe & Phoxuponyou – for their constant contributions on the support forum – cheers!
  • Costenslayer – for offering to help us with cloning our YT videos to AirMo – thanks!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way, you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 30th of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).

Developers

Community

Support Forum

Firefox

  • For Desktop
    • GTK+3 is required for Firefox on Linux as of Beta 43 (the full release comes on the 15th of December).
  • For iOS
    • Version 1.2 is out – please go ahead and test it on your devices!
    • Version 2.0 is going to happen in 2016 (we just don’t know when exactly… yet), and should add synchronization from iOS to Desktop and/or Android.
  • Firefox OS
    • Peace and quiet makes for the start of a good weekend!

Thank you for reading all the way to the end! We hope you join us on Monday (and beyond that day), and wish you a great, relaxing weekend. Take it easy and stay foxy!

https://blog.mozilla.org/sumo/2015/11/27/whats-up-with-sumo-27th-november/


Mozilla Fundraising: A/B Test: Three-page vs One-page donation flow

Friday, 27 November 2015, 15:16
Here are the results of our first A/B test from our 2015 End of Year fundraising campaign. In our control, the three-page flow, credit card donations are processed (via Stripe) from within our user interface, in a … Continue reading

https://fundraising.mozilla.org/ab-test-three-page-vs-one-page-donation-flow/


Dustin J. Mitchell: Remote GPG Agent

Friday, 27 November 2015, 15:00

Private keys should be held close -- the fewer copies of them, and the fewer people who have access to them, the better. SSH agents, with agent forwarding, do a pretty good job of this. For quite a long time, I've had my SSH private key stored only on my laptop and desktop, with a short script to forward that agent into my remote screen sessions. This works great: while I'm connected and my key is loaded, I can connect to hosts and push to repositories with no further interaction. But once I disconnect, the screen sessions can no longer access the key.
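That forwarding script isn't shown here, but the usual version of the trick looks something like this (a sketch only, assuming the common fixed-symlink approach; not the author's actual script):

# In ~/.bashrc on the remote host: keep a stable symlink pointing at the
# socket of the most recent SSH connection, so long-lived screen sessions
# can always reach a live agent through one fixed path.
link="$HOME/.ssh/agent-sock"
if [ -n "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$link" ]; then
    ln -sf "$SSH_AUTH_SOCK" "$link"
    export SSH_AUTH_SOCK="$link"
fi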

Doing the same for GPG keys turns out to be a bit harder, not helped by the lack of documentation from GnuPG itself. In fact, as far as I can tell, it was impossible before GnuPG 2.1, and a great deal more difficult before OpenSSH 6.7.

I don't want exactly the same thing, anyway: I only need access to my GPG private keys once every few days (to sign a commit, for example). So I'd like to control exactly when I make the agent available.

The solution I have found involves this shell script, named remote-gpg:

#! /bin/bash

set -e

host=$1
if [ -z "$host" ]; then
    echo "Supply a hostname"
    exit 1
fi

# remove any existing agent socket (in theory `StreamLocalBindUnlink yes` does this,
# but in practice, not so much)
ssh $host rm -f ~/.gnupg/S.gpg-agent
# forward the local agent's "extra" socket to the standard agent socket path
# on the remote host, and keep the connection open until the user hits enter
ssh -t -R ~/.gnupg/S.gpg-agent:.gnupg/S.gpg-agent-extra $host \
    sh -c 'echo; echo "Perform remote GPG operations and hit enter"; \
        read; \
        rm -f ~/.gnupg/S.gpg-agent'

The critical bit of configuration was to add the following to .gnupg/gpg-agent.conf on my laptop and desktop:

extra-socket /home/dustin/.gnupg/S.gpg-agent-extra

and then kill the agent to reload the config:

gpg-connect-agent reloadagent /bye

The idea is this: the local GPG agent (on the laptop or desktop) publishes this "extra" socket specifically for forwarding to remote machines. The set of commands accepted over the socket is limited, although it does include access to the key material. The SSH command then forwards the socket (this functionality was added in OpenSSH 6.7) to the remote host, after first deleting any existing socket. That command displays a prompt, waits for the user to signal completion of the operation, then cleans up.

To use this, I just open a new terminal or local screen window and run remote-gpg euclid. If my key is not already loaded, I'm prompted to enter the passphrase. GPG even annotates the prompt to indicate that it's from a remote connection. Once I've finished with the private keys, I go back to the window and hit enter.
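For illustration, a session then looks roughly like this (euclid being the example host above):

local$ remote-gpg euclid

Perform remote GPG operations and hit enter

# meanwhile, in a screen window on euclid, operations that need the
# private key now go through the forwarded agent:
euclid$ echo test | gpg --clearsign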

http://code.v.igoro.us/posts/2015/11/remote-gpg-agent.html


Air Mozilla: Participation Call, 26 Nov 2015

Thursday, 26 November 2015, 21:00

The Participation Call helps connect Mozillians who are thinking about how we can achieve crazy and ambitious goals by bringing new people into the project...

https://air.mozilla.org/participation-call-20151126/


Air Mozilla: Reps weekly, 26 Nov 2015

Thursday, 26 November 2015, 19:00

Reps weekly: This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

https://air.mozilla.org/reps-weekly-20151126/


Armen Zambrano: Mozhginfo/Pushlog client released

Thursday, 26 November 2015, 18:17
Hi,
If you've ever spent time trying to query revision metadata from hg, you can now use a Python library we've released to do so.

In bug 1203621 [1], our community contributor @MikeLing [2] has helped us release the pushlog.py module we had written for Mozilla CI tools.

You can find the pushlog_client package here [3] and the code here [4].
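For a sense of the underlying data, here is a small sketch that reads hg's pushlog JSON API directly (the raw endpoint that a pushlog client wraps; see [3] for the actual pushlog_client interface):

# Sketch: fetch push metadata from hg.mozilla.org's json-pushes endpoint.
import json
from urllib.request import urlopen

url = "https://hg.mozilla.org/mozilla-central/json-pushes?startID=1&endID=3"
pushes = json.loads(urlopen(url).read().decode("utf-8"))
for push_id in sorted(pushes, key=int):
    push = pushes[push_id]
    # each push records who pushed, when, and which changesets it contained
    print(push_id, push["user"], push["changesets"][-1])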

Thanks MikeLing!

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1203621
[2] https://github.com/MikeLing
[3] https://pypi.python.org/pypi/pushlog_client
[4] https://hg.mozilla.org/hgcustom/version-control-tools/rev/6021c9031bc3


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/pmk5at0ldBE/mozhginfopushlog-client-released.html


Nick Cameron: Macro hygiene in all its guises and variations

Thursday, 26 November 2015, 03:59

Note, I'm not sure of the terminology for some of this stuff, so I might be making horrible mistakes, apologies.

Usually, when we talk about macro hygiene we mean the ability to not confuse identifiers with the same name but from different contexts. This is a big and interesting topic in its own right and I'll discuss it in some depth later. Today I want to talk about other kinds of macro hygiene.

There is hygiene when naming items (I've heard this called "path hygiene", but I'm not sure if that is a standard term). For example,

mod a {  
    fn f() {}

    pub macro foo() {
        f();
    }
}

a::foo!();  

The macro use will expand to f(), but there is no f in scope. Currently this will be a name resolution error. Ideally, we would remember the scope where the call to f came from and look up f in that scope.

I believe that switching our hygiene algorithm to scope sets and using the scope sets for name resolution solves this issue.

Privacy hygiene

In the above example, f is private to a, so even if we can name it from the expansion of foo, we still can't access it due to its visibility. Again, scope sets come to the rescue. The intuition is that we check privacy from the scope used to find f, not from its lexical position. There are a few more details than that, but nothing that will make sense before explaining the scope sets algorithm in detail.

Unsafety hygiene

The goal here is that when checking for unsafety, whether or not we are allowed to execute unsafe code depends on the context where the code is written, not where it is expanded. For example,

unsafe fn foo(x: i32) {}

macro m1($x: expr) {  
    foo($x)
}

macro m2($x: expr) {  
    $x
}

macro m3($x: expr) {  
    unsafe {
        foo($x)
    }
}

macro m4($x: expr) {  
    unsafe {
        $x
    }
}

fn main() {  
    foo(42); // bad
    unsafe {
        foo(42);  // ok
    }
    m1(42); // bad
    m2(foo(42)); // bad
    m3(42); // ok
    m4(foo(42)); // bad
    unsafe {
        m1(42); // bad
        m2(foo(42)); // ok
        m3(42); // ok
        m4(foo(42)); // ok
    }
}

We could in theory use the same hygiene information as for the previous kinds. But when checking unsafety we are checking expressions, not identifiers, and we only record hygiene info for identifiers.

One solution would be to track hygiene for all tokens, not just identifiers. That might not be too much effort since groups of tokens passed together would have the same hygiene info. We would only be duplicating indices into a table, not more data than that. We would also have to track or be able to calculate the safety-status of scopes.

Alternatively, we could introduce a new kind of block into the token tree system - a block which can't be written by the user, only created by expansion or procedural macros. It would affect precedence but not scoping. Such a block is also the solution to having interpolated AST in the token stream - we just have tokens wrapped in the scope-less block. Such a block could be annotated with its safety-status. We would need to track unsafety during parsing/expansion to make this work. We have something similar to this in the HIR where we can push/pop unsafe blocks. I believe we want an absolute setting here rather than push/pop though, and we also don't want to introduce new scoping.

We could follow the current stability solution and annotate spans, but this is a bit of an abuse of spans, IMO.

I'm not super-happy with any of these solutions.

Stability hygiene

Finally, stability. We would like for macros in libraries with access to unstable code to be able to access unstable code when expanded. This is currently supported in Rust by having a bool on spans. We can probably continue to use this system or adapt either of the solutions proposed for unsafety hygiene.

It would be nice for macros to be marked as stable and unstable, I believe this is orthogonal to hygiene though.

http://www.ncameron.org/blog/macro-hygiene-in-all-its-guises-and-variations/


Mozilla Addons Blog: Add-ons Update – Week of 2015/11/25

Thursday, 26 November 2015, 01:46

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 758 add-ons were reviewed:

  • 602 (79%) were reviewed in less than 5 days.
  • 32 (4%) were reviewed between 5 and 10 days.
  • 124 (16%) were reviewed after more than 10 days.

There are 281 listed add-ons awaiting review, and 189 unlisted add-ons awaiting review. I should note that this is an unusually large number of unlisted add-ons, which is due to a mass uploading by a developer with 100+ add-ons.

Review times for most add-ons have improved recently due to more volunteer activity. Add-ons that are admin-flagged or very complex are now getting much-needed attention, thanks to a new contractor reviewer. There’s still a fairly large review backlog to go through.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 43 Compatibility

This compatibility blog post is now public. The bulk compatibility validation should be run soon.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Changes in let and const in Firefox 44

Firefox 44 includes some breaking changes that you should all be aware of. Please read the post carefully and test your add-ons on Nightly or the newest Developer Edition.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to turn on enforcement by default in Firefox 43.

Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now run in multiple processes, with content code running in a different process than browser code.

This is the time to test your add-ons and make sure they continue working in Firefox. We’re holding regular office hours to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Web Extensions

If you read the post on the future of add-on development, you should know there are big changes coming. We’re investing heavily in the new WebExtensions API, so we strongly recommend that you start looking into it for your add-ons. You can track the progress of its development at http://www.arewewebextensionsyet.com/.

https://blog.mozilla.org/addons/2015/11/25/add-ons-update-74/


Air Mozilla: Quality Team (QA) Public Meeting, 25 Nov 2015

Thursday, 26 November 2015, 00:30

Quality Team (QA) Public Meeting: This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

https://air.mozilla.org/quality-team-qa-public-meeting-20151125/


Air Mozilla: Bugzilla Development Meeting, 25 Nov 2015

Thursday, 26 November 2015, 00:00

Bugzilla Development Meeting: Help define, plan, design, and implement Bugzilla's future!

https://air.mozilla.org/bugzilla-development-meeting-20151125/


Chris H-C: How Mozilla Pays Me

Wednesday, 25 November 2015, 22:08

When I told people I was leaving BlackBerry and going to work for Mozilla, the first question was often “Who?”

(“The Firefox people, ${familyMember}” “Oh, well why didn’t you say so”)

More often the first question (and almost always the second question for ${familyMember}) was “How do they make their money?”

When I was working for BlackBerry, it seemed fairly obvious: BlackBerry made its money selling BlackBerry devices. (Though obvious, this was actually incorrect, as the firm made its money more through services and servers than devices. But that’s another story.)

With Mozilla, there’s no clear thing that people’s minds can latch onto. There’s no doodad being sold for dollarbucks, there’s no subscriber fee, there’s no “professional edition” upsell…

Well, today the Mozilla Foundation released its State of Mozilla report including financials for calendar 2014. This ought to clear things up, right? Well…

The most relevant part of this would be page 6 of the audited financial statement which shows that, roughly speaking, Mozilla makes its money thusly (top three listed):

  • $323M – Royalties
  • $4.2M – Contributions (from fundraising efforts)
  • $1M – Interest and Dividends (from investments)

Where this gets cloudy is that “Royalties” line. The Mozilla Foundation is only allowed to accrue certain kinds of income since it is a non-profit.

Which is why I’m not employed by the Foundation but by Mozilla Corporation, the wholly-owned subsidiary of the Mozilla Foundation. MoCo is a taxable entity responsible for software development and stuff. As such, it can earn and spend like any other privately-held concern. It sends dollars back up the chain via that “Royalties” line because it needs to pay to license wordmarks, trademarks, and other intellectual property from the Foundation. It isn’t the only contributor to that line, I think, as I expect sales of plushie Firefoxen and tickets to MozFest factor in somehow.

So, in conclusion, rest assured, ${concernedPerson}: Mozilla Foundation has plenty of money coming in to pay my…

Well, yes, I did just say I was employed by Mozilla Corporation. So?

What do you mean where does the Corporation get its money?

Fine, fine, I was just going to gloss over this part and sway you with those big numbers and how MoCo and MoFo sound pretty similar… but I guess you’re too cunning for that.

Mozilla Corporation is not a publicly-traded corporation, so there are no public documents I can point you to for answers to that question. However, there was a semi-public statement back in 2006 that confirmed that the Corporation was earning within an order of magnitude of $76M in search-related partnership revenue.

It’s been nine years since then. The Internet has changed a lot since the year Google bought YouTube and MySpace was the primary social network of note. And our way of experiencing it has changed from sitting at a desk to having it in our pockets. Firefox has been downloaded over 100 million times on Android and topped some of the iTunes App Store charts after being released twelve days ago for iOS. If this sort of partnership is still active, and is somewhat proportional to Firefox’s reach, then it might just be a different number than “within an order of magnitude of $76M.”

So, ${concernedPerson}, I’m afraid there just isn’t any more information I can give you. Mozilla does its business, and seems to be doing it well. As such, it collects revenue which it has to filter through various taxes and regulation authorities at various levels which are completely opaque even when they’re transparent. From that, I collect a paycheque.

At the very least, take heart from the Contributions line. That money comes from people who like that Mozilla does good things for the Internet. So as long as we’re doing good things (and we have no plans to stop), there is a deep and growing level of support that should keep me from asking for money.

Though, now that you mention it…

:chutten


https://chuttenblog.wordpress.com/2015/11/25/how-mozilla-pays-me/


Air Mozilla: The Joy of Coding - Episode 36

Wednesday, 25 November 2015, 21:00

The Joy of Coding - Episode 36: mconley livehacks on real Firefox bugs while thinking aloud.

https://air.mozilla.org/the-joy-of-coding-episode-36/


Jan de Mooij: Making `this` a real binding in SpiderMonkey

Wednesday, 25 November 2015, 16:38

Last week I landed bug 1132183, a pretty large patch rewriting the implementation of this in SpiderMonkey.

How this Works In JS

In JS, when a function is called, an implicit this argument is passed to it. In strict mode, this inside the function just returns that value:

function f() { "use strict"; return this; }
f.call(123); // 123

In non-strict functions, this always returns an object. If the this-argument is a primitive value, it's boxed (converted to an object):

function f() { return this; }
f.call(123); // returns an object: new Number(123)

Arrow functions don't have their own this. They inherit the this value from their enclosing scope:

function f() {
    "use strict";
    () => this; // `this` is 123
}
f.call(123);

And, of course, this can be used inside eval:

function f() {
    "use strict";
    eval("this"); // 123
}
f.call(123);

Finally, this can also be used in top-level code. In that case it's usually the global object (lots of hand waving here).
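For example, in a classic (non-module) script:

// At the top level of a script, `this` is (usually) the global object.
var global = this;
console.log(this === global);            // true
console.log((() => this)() === global);  // arrows see the same top-level value
console.log(eval("this") === global);    // and so does eval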

How this Was Implemented

Until last week, here's how this worked in SpiderMonkey:

  • Every stack frame had a this-argument,
  • Each this expression in JS code resulted in a single bytecode op (JSOP_THIS),
  • This bytecode op boxed the frame's this-argument if needed and then returned the result.

Special case: to support the lexical this behavior of arrow functions, we emitted JSOP_THIS when we defined (cloned) the arrow function and then copied the result to a slot on the function. Inside the arrow function, JSOP_THIS would then load the value from that slot.

There was some more complexity around eval: eval-frames also had their own this-slot, so whenever we did a direct eval we'd ensure the outer frame had a boxed (if needed) this-value and then we'd copy it to the eval frame.

The Problem

The most serious problem was that it's fundamentally incompatible with ES6 derived class constructors, as they initialize their 'this' value dynamically when they call super(). Nested arrow functions (and eval) then have to 'see' the initialized this value, but that was impossible to support because arrow functions and eval frames used their own (copied) this value, instead of the updated one.

Here's a worst-case example:

class Derived extends Base {
    constructor() {
        var arrow = () => this;

        // Runtime error: `this` is not initialized inside `arrow`.
        arrow();

        // Call Base constructor, initialize our `this` value.
        eval("super()");

        // The arrow function now returns the initialized `this`.
        arrow();
    }
}

We currently (temporarily!) throw an exception when arrow functions or eval are used in derived class constructors in Firefox Nightly.

Boxing this lazily also added extra complexity and overhead. I already mentioned how we had to compute this whenever we used eval.

The Solution

To fix these issues, I made this a real binding:

  • Non-arrow functions that use this or eval define a special .this variable,
  • In the function prologue, we get the this-argument, box it if needed (with a new op, JSOP_FUNCTIONTHIS) and store it in .this,
  • Then we simply use that variable each time this is used.

Arrow functions and eval frames no longer have their own this-slot, they just reference the .this variable of the outer function. For instance, consider the function below:

function f() {
    return () => this.foo();
}

We generate bytecode similar to the following pseudo-JS:

function f() {
    var .this = BoxThisIfNeeded(this);
    return () => (.this).foo();
}

I decided to call this variable .this, because it nicely matches the other magic 'dot-variable' we already had, .generator. Note that these are not valid variable names so JS code can't access them. I only had to make sure with-statements don't intercept the .this lookup when this is used inside a with-statement...

Doing it this way has a number of benefits: we only have to check for primitive this values at the start of the function, instead of each time this is accessed (although in most cases our optimizing JIT could/can eliminate these checks, when it knows the this-argument must be an object). Furthermore, we no longer have to do anything special for arrow functions or eval; they simply access a 'variable' in the enclosing scope and the engine already knows how to do that.

In the global scope (and in eval or arrow functions in the global scope), we don't use a binding for this (I tried this initially but it turned out to be pretty complicated). There we emit JSOP_GLOBALTHIS for each this-expression, then that op gets the this value from a reserved slot on the lexical scope. This global this value never changes, so the JITs can get it from the global lexical scope at compile time and bake it in as a constant :) (Well.. in most cases. The embedding can run scripts with a non-syntactic scope chain, in that case we have to do a scope walk to find the nearest lexical scope. This should be uncommon and can be optimized/cached if needed.)

The Debugger

The main nuisance was fixing the debugger: because we only give (non-arrow) functions that use this or eval their own this-binding, what do we do when the debugger wants to know the this-value of a frame without a this-binding?

Fortunately, the debugger (DebugScopeProxy, actually) already knew how to solve a similar problem that came up with arguments (functions that don't use arguments don't get an arguments-object, but the debugger can request one anyway), so I was able to cargo-cult and do something similar for this.

Other Changes

Some other changes I made in this area:

  • In bug 1125423 I got rid of the innerObject/outerObject/thisValue Class hooks (also known as the holy grail). Some scope objects had a (potentially effectful) thisValue hook to override their this behavior, which made it hard to see what was going on. Getting rid of that made it much easier to understand and rewrite the code.
  • I posted patches in bug 1227263 to remove the this slot from generator objects, eval frames and global frames.
  • IonMonkey was unable to compile top-level scripts that used this. As I mentioned above, compiling the new JSOP_GLOBALTHIS op is pretty simple in most cases; I wrote a small patch to fix this (bug 922406).

Conclusion

We changed the implementation of this in Firefox 45. The difference is (hopefully!) not observable, so these changes should not break anything or affect code directly. They do, however, pave the way for more performance work and fully compliant ES6 Classes! :)

http://jandemooij.nl/blog/2015/11/25/making-this-a-real-binding-in-spidermonkey/


Mozilla Addons Blog: A New Firefox Add-ons Validator

Wednesday, 25 November 2015, 13:30

The state of add-ons has changed a lot over the past five years, with Jetpack add-ons rising in popularity and Web Extensions on the horizon. Our validation process hasn’t changed as much as the ecosystem it validates, so today Mozilla is announcing we’re building a new Add-ons Validator, written in JS and available for testing today! We started this project only a few months ago and it’s still not production-ready, but we’d love your feedback on it.

Why the Add-ons Validator is Important

Add-ons are a huge part of why people use Firefox. There are currently over 22,000 available, and with work underway to allow Web Extensions in Firefox, it will become easier than ever to develop and update them.

All add-ons listed on addons.mozilla.org (AMO) are required to pass a review by Mozilla’s add-on review team, and the first step in this process is automated validation using the Add-ons Validator.

The validator alerts reviewers to deprecated API usage, errors, and bad practices. Since add-ons can contain a lot of code, these alerts help developers pinpoint the bits of code that might make the browser buggy or slow, among other problems. It also helps detect insecure add-on code, keeping your browsing fast and safe.

Our current validator is a bit old, and because it’s written in Python with JavaScript dependencies, it is difficult for add-on developers to install themselves. This means add-on developers often don’t know about validation errors until they submit their add-on for review.

This wastes time, introducing a feedback cycle that could have been avoided if the add-on developer could have just run addons-validator myAddon.xpi before they uploaded their add-on. If developers could easily check their add-ons for errors locally, getting their add-ons in front of millions of users is that much faster.

And now they can!

The new Add-ons Validator, in JS

I’m not a fan of massive rewrites, but in this case it really helps. Add-on developers are JavaScript coders and nearly everyone involved in web development these days uses Node.js. That’s why we’ve written the new validator in JavaScript and published it on npm, which you can install right now.

We also took this opportunity to review all the rules the old add-on validator defined, and removed a lot of outdated ones. Some of these hadn’t been seen on AMO for years. This allowed us to cut down on code footprint and make a faster, leaner, and easier-to-work-with validator for the future.

Speaking of which…

What’s next?

The new validator is not yet production-quality code, and there are rules we haven’t implemented, but we’re looking to finish it by the first half of next year.

We’re still porting over relevant rules from the old validator. Our three objectives are:

  1. Porting old rules (discarding outdated ones where necessary)
  2. Adding support for Web Extensions
  3. Getting the new validator running in production

We’re looking for help with those first two objectives, so if you’d like to help us make our slightly ambitious full-project-rewrite-deadline, you can…

Get Involved!

If you’re an add-on developer, JavaScript programmer, or both: we’d love your help! Our code and issue tracker are on GitHub at github.com/mozilla/addons-validator. We keep a healthy backlog of issues available, so you can help us add rules, review code, or test things out there. We also have a good first bug label if you’re new to add-ons but want to contribute!

If you’d like to try the next-generation add-ons validator, you can install it with npm: npm install addons-validator. Run your add-ons against it and let us know what you think. We’d love your feedback as GitHub issues, or emails on the add-on developer mailing list.
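In shell terms, trying it out boils down to the two commands from this post (use npm install -g if you want the addons-validator command on your PATH):

# install the validator from npm, then run it against a local add-on package
npm install addons-validator
addons-validator myAddon.xpi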

And if you’re an add-on developer who wishes the validator did something it currently doesn’t, please let us know!

We’re really excited about the future of add-ons at Mozilla; we hope this new validator will help people write better add-ons. It should make writing add-ons faster, help reviewers get through add-on approvals faster, and ultimately result in more awesome add-ons available for all Firefox users.

Happy hacking!

https://blog.mozilla.org/addons/2015/11/25/a-new-firefox-add-ons-validator/


Matjaz Horvat: Meet Jarek, splendid Pontoon contributor

Wednesday, 25 November 2015, 12:43

Some three months ago, a new guy named jotes showed up in the #pontoon IRC channel. It quickly became obvious that he was running a local instance of Pontoon and was ready to start contributing code. Fast forward to the present: he is one of the core Pontoon contributors. In this short period of time, he implemented several important features, all in his free time:

Top contributors. He started by optimizing the Top contributors page. More specifically, he reduced the number of DB queries by some 99%. Next, he added filtering by time period and later on also by locale and project.

User permissions. Pontoon used to rely on the Mozillians API for giving permissions to localizers. It turned out we needed a more detailed approach, with team managers manually granting permissions to their localizers. Guess who took care of it!

Translation memory. Currently, Jarek is working on translation memory optimizations. Given his track record, our expectations are pretty high. :-)

I have this strange ability to close my eyes when somebody tries to take a photo of me, so on most of them I look like a statue of melancholy. :D

What brought you to Mozilla?
A friend recommended a documentary called Code Rush to me. Maybe it will sound stupid, but I was fascinated by the idea of a garage full of fellow hackers with the power to change the world. During one of my sleepless nights I visited whatcanidoformozilla.org, and after a few patches I knew Mozilla was my place. A place where I can learn something new with the help of many amazing people.

Jarek Śmiejczak, thank you for being splendid! And as you said, huge thanks to Linda – love of your life – for her patience and for being an active supporter of the things you do.

To learn more about Jarek, follow his blog at Joyful hackin’.
To start hackin’ on Pontoon, get involved now.

http://horv.at/blog/meet-jarek-splendid-pontoon-contributor/


Tarek Ziadé: Should I use PYTHONOPTIMIZE?

Wednesday, 25 November 2015, 10:50

Yesterday, I was reviewing some code for our projects and in a PR I saw something roughly similar to this:

try:
    assert hasattr(SomeObject, 'some_attribute')
    SomeObject.some_attribute()
except AssertionError:
    SomeObject.do_something_else()

Relying on assert like that didn't strike me as a good idea, because when Python is launched in optimized mode (which you can activate with the eponymous PYTHONOPTIMIZE environment variable or with -O or -OO), all assertions are stripped from the code.

To my surprise, a lot of people dismiss -O and -OO, saying that no one uses those flags in production and that code containing asserts is fine.

PYTHONOPTIMIZE has three possible values: 0, 1 (-O) or 2 (-OO). 0 is the default, nothing happens.

For 1 this is what happens:

  • asserts are stripped
  • the generated bytecode files are using the .pyo extension instead of .pyc
  • sys.flags.optimize is set to 1
  • __debug__ is set to False

And for 2:

  • everything 1 does
  • docstrings are stripped.

To my knowledge, one legacy reason to run -O was to produce a more efficient bytecode, but I was told that this is not true anymore.

Another behavior that has changed is related to pdb: you could not run some step-by-step debugging when PYTHONOPTIMIZE was activated.

Last, the pyo vs pyc thing should go away one day, according to PEP 488.

So what does that leave us? Is there any good reason to use those flags?

Some applications leverage the __debug__ flag to offer two running modes: one with more debug information or a different behavior when an error is encountered.

That's the case for pyglet, according to their doc.

Some companies are also using the -OO mode to slightly reduce the memory footprint of running apps. It seems to be the case at YouTube.

And it's entirely possible that Python itself, in the future, adds some new optimizations behind that flag.

So yeah, even if you don't use those options yourself, it's good practice to make sure that your Python code is tested with all possible values for PYTHONOPTIMIZE.

It's easy enough: just run your tests with -O, with -OO, and without either, and make sure your code does not depend on docstrings or assertions.

If you have to depend on one of them, make sure your code gracefully handles the optimize modes or raises an early error explaining why you are not compatible with them.
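For example, a minimal guard along those lines:

import sys

# Fail fast if assert-based checks would be stripped by optimized mode.
# (`not __debug__` is an equivalent test: -O sets __debug__ to False.)
if sys.flags.optimize >= 1:
    raise RuntimeError(
        "this package relies on assert statements; "
        "running with -O/-OO (PYTHONOPTIMIZE) is not supported")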

Thanks to Brett Cannon, Michael Foord and others for their feedback on Twitter on this.

http://blog.ziade.org/2015/11/25/should-i-use-pythonoptimize/


Nick Cameron: Macro plans, overview

Wednesday, 25 November 2015, 02:05

In this post I want to give a bit of an overview of the changes I'm planning to propose for the macro system. I haven't worked out some of the details yet, so this could change a lot.

To summarise, the broad thrusts of the redesign are:

  • changes to the way procedural macros work and the parts of the compiler they have access to,
  • changes to the hygiene algorithm, and to what hygiene is applied,
  • addressing modularisation issues,
  • syntactic changes.

I'll summarise each here, but there will probably be a blog post about each before a proper RFC. At the end of this blog post I'll talk about backwards compatibility.

I'd also like to support macro and ident inter-operation better, as described here.

Procedural macros

Mechanism

I intend to tweak the system of traits and enums, etc. to make procedural macros easier to use. My intention is that there should be a small number of function signatures that can be implemented (not just one, unfortunately, because I believe function-like vs attribute-like macros will take different arguments; furthermore, I think we need versions for hygienic expansion and for expansion with DIY-hygiene, and the latter case must be supplied with some hygiene information in order for the function to do its own hygiene. I'm not certain that is the right approach, though). Although this is not as Rust-y as using traits, I believe the simplicity benefits outweigh the loss in elegance.

All macros will take a set of tokens in and generate a set of tokens out. The token trees should be a simplified version of the compiler's internal token trees to allow procedural macros more flexibility (and forwards compatibility). For attribute-like macros, the code that they annotate must still parse (necessary due to internal attributes, unfortunately), but will be supplied as tokens to the macro itself.
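As a rough sketch of the kind of signatures this might mean (every name here is hypothetical; nothing below is settled API):

// Hypothetical: a function-like macro and an attribute-like macro, both
// working purely on token streams. `TokenStream`, `MacroContext` and the
// `#[macro]`/`#[macro_attribute]` attributes are placeholders for whatever
// libmacro would actually provide.
#[macro]
fn my_macro(cx: &mut MacroContext, input: TokenStream) -> TokenStream {
    // parse `input`, build output tokens, hand them back to the expander
    input
}

#[macro_attribute]
fn my_attr(cx: &mut MacroContext, attr: TokenStream, item: TokenStream) -> TokenStream {
    // `item` is the annotated item's tokens; it must parse, but arrives as tokens
    item
}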

I intend that libsyntax will remain unstable and (stable) procedural macros will not have direct access to it or any other compiler internals. We will create a new crate, libmacro (or something) which will re-export token trees from libsyntax and provide a whole bunch of functionality specifically for procedural macros. This library will take the usual path to stabilisation.

Macros will be able to parse tokens and expand macros in various ways. The output will be some kind of AST. However, after manipulating the AST, it is converted back into tokens to be passed back to the macro expander. Note that this requires us storing hygiene and span information directly in the tokens, not the AST.

I'm not sure exactly what the AST we provide should look like, nor the bounds on what should be in libmacro vs what can be supplied by outside libraries. I would like to start by providing no AST at all and see what the eco-system comes up with.

It is worth thinking about the stability implications of this proposal. At some point in the future, the procedural macro mechanism and libmacro will be stable. So, a crate using stable Rust can use a crate which provides a procedural macro. At some point later we evolve the language in a non-breaking way which changes the AST (internal to libsyntax). We must ensure that this does not change the structure of the token trees we give to macros. I believe that should not be a problem for a simple enough token tree. However, the procedural macro might expect those tokens to parse in a certain way, which they no longer do, causing the procedural macro to fail and thus compilation to fail. Thus, the stability guarantees we provide users can be subverted by procedural macros. However, I don't think this is possible to prevent. In the most pathological case, the macro could check if the current date is later than a given one and in that case panic. So, we are basically passing the buck on backwards compatibility to the procedural macro authors and the libraries they use. There is an obvious hazard here if a macro is widely used and badly written. I'm not sure if this can be addressed, other than making sure that libraries exist which make compatibility easy.

Libraries

I hope that the situation for macro authors will be similar to that for other authors: we provide a small but essential standard library (libmacro) and more functionality is provided by the ecosystem via crates.io.

The functionality I expect to see in libmacro should be focused on interaction with the rest of the parser and macro expander, including macro hygiene. I expect it to include:

  • interning a string and creating an ident token from a string
  • creating and manipulating tokens
  • expanding macros (macro_rules and procedural), possibly in different ways
  • manipulating the hygiene of tokens
  • manipulating expansion traces for spans
  • name resolution of module and macro names - note that I expect these to return token trees, which gives a macro access to the whole program, I'm not sure this is a good idea since it breaks locality for macros
  • check and set feature gates
  • mark attributes and imports as used

The most important external libraries I would like to see would be to provide an AST-like abstraction, parsing, and tools for building and manipulating AST. These already exist (syntex, ASTer), so I am confident we can have good solutions in this space, working towards crates which are provided on crates.io, but are officially blessed (similar to the goals of other libraries).

I would very much like to see quasi-quoting and pattern matching in blessed libraries. These are important tools, the former currently provided by libsyntax. I don't see any reason these must be provided by libmacro, and since quasi-quoting produces AST, they probably can't be (since they would be associated with a particular AST implementation). However, I would like to spend some time improving the current quasi-quoting system, in particular to make it work better with hygiene and expansion traces.

Alternatively, libmacro could provide quasi-quoting which produces token trees, and there is then a second step to produce AST. Since hygiene info will operate at the tokens level, this might be possible.

Pattern matching on tokens should provide functionality similar to that provided by macro_rules!, making writing procedural macros much easier. I'm convinced we need something here, but not sure of the design.

Naming and registration

See the section on modularisation below; the same things apply to procedural macros as to macro_rules macros.

A macro called baz declared in a module bar inside a crate foo could be called using ::foo::bar::baz!(...) or imported using use foo::bar::baz!; and used as baz!(...). Other than a feature flag until procedural macros are stabilised, users of macros need no other annotations. When looking at an extern crate foo statement, the compiler will work out whether we are importing macros.

I expect that functions intended to work as procedural macros would be marked with an attribute (#[macro] or some such). We would also have #[cfg(macro)] for helper functions, etc. Initially, I expect a whole crate must be #[cfg(macro)], but eventually I would like to allow mixing in a crate (just as we allow macro_rules macros in the same crate as normal code).

There would be no need to register macros with the plugin registry.

A vaguely related issue is whether interaction between the macros and the compiler should be via normal function calls (to libmacro) or via IPC. The latter would allow procedural macros to be used without dynamic linking and thus permit a statically linked compiler.

Hygiene

I plan to change the hygiene algorithm we use from mtwt to sets of scopes. This allows us to use hygiene information in name resolution, thus alleviating the 'absolute path' problem in macros. We can also use this information to support hygienic checking of privacy. I'll explain the algorithm and how it will apply to Rust in another blog post. I think this algorithm will be easier for procedural macro authors to work with too.

Orthogonally, I want to make all identifiers hygienic, not just variables and labels. I would also like to support hygienic unsafety. I believe both these things are more implementation than design issues.

Modularisation

The goal here is to treat macros the same way as other items, naming via paths and allowing imports. This includes naming of attributes, which will allow paths for naming (e.g., #[foo::bar::baz]). Ordering of macros should also not be important. The mechanism to support this is moving parts of name resolution and privacy checking to macro expansion time. The details of this (and the interaction with sets of scopes hygiene, which essentially gives a new mechanism for name resolution) are involved.

Syntax

These things are nice to have, rather than core parts of the plan. New syntax for procedural macros is covered above.

I would like to fix the expansion issues with arguments and nested macros, see blog post.

I propose that new macros should use macro! rather than macro_rules!.

I would like a syntactic form for macro_rules macros which only matches a single pattern and is more lightweight than the current syntax. The current syntax would still be used where there are multiple patterns. Something like,

macro! foo(...) => {  
    ...
}

Perhaps we drop the => too.

We need to allow privacy annotations for macros, not sure the best way to do this: pub macro! foo { ... } or macro! pub foo { ... } or something else.

Backwards compatibility

Procedural macros are currently unstable; there will be a lot of breaking changes, but the reward is a path to stability.

macro_rules! is a stable part of the language. It will not break (modulo usual caveat about bug fixes). The plan is to introduce a whole new macro system around macro!, if you have macros currently called macro!, I guess we break them (we will run a warning cycle for this and try and help anyone who is affected). We will deprecate macro_rules! once macro! is stable. We will track usage with the intention of removing macro_rules at 2.0 or 3.0 or whatever. All macros in the standard libraries will be converted to using macro!, this will be a breaking change, we will mitigate by continuing to support the old but deprecated versions of the macros. Hopefully, modularisation will support this (needs more thought to be sure). The only change for users of macros will be how the macro is named, not how it is used (modulo new applications of hygiene).

Most existing macro_rules! macros should be valid macro! macros. The only difference will be using macro! instead of macro_rules!, and the new scoping/naming rules may lead to name clashes that didn't exist before (note this is not in itself a breaking change, it is a side effect of using the new system). Macros converted in this way should only break where they take advantage of holes in the current hygiene system. I hope that this is a low enough bar that adoption of macro! by macro_rules! authors will be quick.

Hygiene

There are two backwards compatibility hazards with hygiene, both affect only macro_rules! macros: we must emulate the mtwt algorithm with the sets of scopes algorithm, and we must ensure unhygienic name resolution for items which are currently not treated hygienically. In the second case, I think we can simulate unhygienic expansion for types etc, by using the set of scopes for the macro use-site, rather than the proper set. Since only local variables are currently treated hygienically, I believe this means the first case will Just Work. More details on this in a future blog post.

http://www.ncameron.org/blog/macro-plans-overview/


Air Mozilla: Privacy for Normal People

Tuesday, 24 November 2015, 21:00

Privacy for Normal People: Mozilla cares deeply about user control. But designing products that protect users is not always obvious. Sometimes products give the illusion of control and security...

https://air.mozilla.org/privacy-for-normal-people/


Armen Zambrano: Welcome F3real, xenny and MikeLing!

Tuesday, 24 November 2015, 20:58
As described by jmaher, this week we started the first week of mozci's quarter of contribution.

I want to personally welcome Stefan, Vaibhav and Mike to mozci. We hope you get to learn and we thank you for helping Mozilla move forward in this corner of our automation systems.

I also want to give thanks to Alice for committing to mentoring. This would not be possible without her help.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/Gc62FTpVAgI/welcome-f3real-xenny-and-mikeling.html


Armen Zambrano: Mozilla CI tools meet up

Tuesday, 24 November 2015, 20:52
In order to help the contributors in mozci's quarter of contribution, we have set up a mozci meet-up this Friday.

If you're interested in learning about Mozilla's CI, how to contribute, or how to build your own scheduling with mozci, come and join us!

9am ET -> other time zones
Vidyo room: https://v.mozilla.com/flex.html?roomdirect.html&key=GC1ftgyxhW2y


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/-YHtwuAvPug/mozilla-ci-tools-meet-up.html


