2017/05/08

Mixing SimpleUploads with other drag&drop scripts

One of the features of the SimpleUploads plugin is to let users add files to their content by simply dragging them from their desktop. Of course, there might be more than one editor on the page, so files are accepted only when they are dropped on an editor.

But there's a little problem: if the user drops the file outside the editor, they may lose their current content because the browser loads that file instead (yes, you can use autosave and also prompt them with onBeforeUnload). So, to prevent data loss, or the delay and surprise of having to go back and restore the last saved draft, I implemented a little check that rejects any file dropped outside the editor.

So does this fix all the problems?
Of course not!
If you want to support drag&drop in another part of the page, my check prevents that other script from working, but the solution is simple:

In the configuration of your CKEditor, add this extra setting:
simpleuploads_allowDropOutside = true;
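For example, if you create the editor with CKEDITOR.replace, it can go like this (a minimal sketch: the "editor1" name and the extraPlugins line are just placeholders for however you already set up your editor):

CKEDITOR.replace('editor1', {
 extraPlugins: 'simpleuploads',
 // Let other drag&drop scripts handle files dropped outside the editor
 simpleuploads_allowDropOutside: true
});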

That's all. If that setting exists I won't touch anything outside the editor and you can keep working as usual.

2017/05/05

Should you use Windows S?

TL;DR:
No.

OK, after clearing that up, let's talk about what Windows S is and why it has value (but maybe not for you).

Windows has forever been tied to backwards compatibility: support for all the legacy apps and APIs that people have been using almost since Win95.

Every API is an additional support burden and a potential attack vector, and so we find that despite all the years of code review, tests and whatever else, new bugs always pop up.

So, is it really strange that Microsoft would like to reduce the attack surface as much as possible by allowing access only to a very limited API, removing low-level calls and the ability to run any program obtained from any source?

Obviously they can't do that for most people right now, because most apps use the Win32 API, and home users would outright refuse to even test a Windows that they know won't run some app they use.

But on the other hand, there are lots of people quite happy with their iPads and Chromebooks, which don't run any Windows app at all, and they boast about how great those devices are because they don't get viruses or malware there. Meanwhile, system administrators at schools look at the landscape and see, on one side, computers running full Windows with all the extra maintenance that might require, and on the other side these restricted devices where everything is locked down, where they control what's installed and they know the device won't run the traditional malware that is sent in emails or injected through an evil ad.

That is the target of Windows S: cheap computers that can be managed easily and can run any UWP or Win32 app distributed through the Windows Store.

We know that most of the apps people use aren't in the Store, but this might be the kind of incentive (a whole set of new computers sold to schools by the thousands) to port those apps to UWP, and little by little people might find more and more apps there. In the future it might even be possible for Microsoft to enable an optional lockdown of every Windows computer so that only approved apps run there, and everyone (except antivirus vendors) will be happy knowing that their computers are safer that way.

Microsoft currently has to fight an uphill battle to be relevant 5-10 years from now. Most people now browse mostly from their phones and tablets; Microsoft has lost the first battle to have a mobile OS that people use, and if they give up completely they might end up with a very marginal share of the whole OS market.

So it really makes sense for them to make bold moves like this one, and with the current set of frameworks that provide cross-platform solutions (Cordova, Electron, React Native, ...), it wouldn't be surprising to find that the ones that still can't target UWP get proper support, and everybody wins this way.


2017/05/02

On performance and themes

I like to read about web performance, trying to understand how things work and which patterns to use or avoid on the web. This means that I try to focus on using optimized JavaScript and CSS, and I don't include huge libraries or dead code that isn't used.


But on the other hand, it's clear that there are lots of CMSs like WordPress, and shops like PrestaShop, that provide support for themes, so designers use Photoshop, slice the design up, and generate a ton of JavaScript and CSS by picking all the libraries, components and whatever else they need.

There's no worry about file size, page performance or anything like that; it's just a matter of making it look nice, not making it look nice and work in an optimized way. And people prefer a nice-looking site, even if it takes slightly longer to load, over one that has no design or uses outdated styles.

Recently I looked at some page templates trying to find a nice-looking one for an NGO. After reviewing several, I thought that I had found a good one, but my heart sank when I discovered that it was built by mixing several CSS files loaded on demand, and all the responsiveness is achieved with JavaScript that modifies the DOM and swaps the loaded CSS files as the window is resized. Yes, not even a single media query rule; everything done with JavaScript.

So I threw it all away, started with a clean page, and I was able to create my "design" mixing things from here and there, starting with a mobile-first approach for the first time. The outcome is a simple page with only the required styles and scripts, one that I can keep improving and that is a fraction of the size of any of the designs I looked at.

Obviously the drawback is that in order to do this I had to spend my own time, so it's easy to understand why for many sites the answer is to use those kinds of themes. It's just a matter of finding one that you like, paying for it once and you're ready to go. But it would be great if the pages hosting themes could provide some help to highlight themes with good performance and correct use of the new technologies.

2017/04/09

Protection against bad SSL certs

Again, Twitter is a bad place to try to express ideas: 140 chars is too short and the sentences get broken up.

Let's start with this tweet from Bryan Ford. It links to an article that explains how a group of attackers was able to get full control of a Brazilian bank's site by altering its DNS records. They created a copy of the pages and got new SSL certs (we guess that the article is wrong about those 6-month-old certificates from Let's Encrypt; that doesn't make sense, as they are valid only for 90 days and they could have been created in a few seconds after taking over the DNS).

So losing control of DNS is really a big problem: even after they realized what was happening, they had to "fight" with NIC.br to recover control of their account and restore the proper DNS.

So what are possible solutions about this problem?
I think that something along the lines of HPKP (HTTP Public Key Pinning) is part of the answer. If everything had worked correctly, the browsers would have noticed that the cert was wrong and refused to load the page, so visitors wouldn't have entered their credentials.
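To illustrate the idea (the pin values below are made-up placeholders, not real hashes): with HPKP the server sends a header like this one, and from then on the browser refuses any certificate for that site whose public key doesn't match one of the pinned hashes:

Public-Key-Pins: pin-sha256="PrimaryKeyHashPlaceholder="; pin-sha256="BackupKeyHashPlaceholder="; max-age=5184000; includeSubDomains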

Bryan replied that HPKP has several problems and, as you can read, it's hardly used.
So maybe the answer is not HPKP as it stands now, but something developed to take new attacks into account.

Nowadays getting an HTTPS cert is finally easy thanks to Let's Encrypt, but there are still other kinds of certs, like EV SSL, that provide verification of the company that runs the website. They aren't cheap and they require time and effort to get, so maybe they are the starting point for extra protection, not just for showing a green URL bar.

Let's say that all EV certs are logged to a central repository (or multiple redundant copies), and that repository is the base for a new HPKP, so it can't be abused by people trying to pin a free SSL cert that they got as soon as they took control of your server or your DNS. This new pinning would help protect those special sites that have worked and paid for a cert that provides greater security to their users, and the browsers would help reach that goal.
A second way to use that central repository would be for any CA to check it before issuing a new cert. If a company has an EV cert issued, why would they now want a free SSL cert? Have they gone bankrupt? Or maybe they aren't the ones requesting the new cert? This could close the hole that allows any CA to issue a cert for an attacked domain.

Carlos Ferreira replied about CAA records (a sample record is sketched after this list), but I fail to see how they are useful at all in the long run.
  1. The attacker doesn't have control of your server or your DNS. Then this will prevent them from getting an SSL cert, but I don't think that they could really get a cert from any CA in that case anyway; maybe I'm wrong.
  2. The attacker has control of your server and is able to request new SSL certs. Why would they do that? If they are in your server, they can just use your existing cert; they don't need to add or create new ones.
  3. The attacker has control of your DNS. Then they can set the CAA records as they please and there's no protection at all.
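For reference, this is roughly what CAA records look like in a DNS zone (example.com and the CA name are just placeholders); a CA is expected to refuse to issue certs for the domain unless it's listed in an issue tag:

example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"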
So that leads to his other reply about watching the DNS records with a tool like DNS Spy. Yeah, that can be useful to notice an attack, but I guess that by the time they got the mail (to a different domain, of course) about the modified DNS, the admins of the attacked domain might have already noticed some problems, and anyway it's just damage control instead of the protection that would have existed if the attackers hadn't been able to get certs for their fake servers. So yes, watching DNS is useful, but it's not the solution.

There are many technologies around web security; some are old and trusted, others are proposals that didn't gain momentum for whatever reason. I'm an outsider, so I can't really provide a full list of ways to get proper protection, but I feel that there are ways to get better security, like that promised by HPKP, without putting basic sites at risk.

2016/07/03

Getting a Google Maps API Key

On June 22nd Google announced that, from that day on, every new implementation of the Maps API requires the usage of an API key.

This is very important for anyone that wants to use my Google Maps plugin for CKEditor, because now you must get your own key in order to use it.
The basic usage of the API allows 25,000 free map loads per day, and you can have one key for each domain where you want to use it. Beyond that point you'll have to get a paid license. This is more or less the same as before; they have adjusted the way some things are counted, but the important part is that previously the free tier merely encouraged signing the requests with an API key and now it's a hard requirement.

Getting an API key isn't too hard because the process has been streamlined and you mostly have to agree to the Terms and Conditions. You can find their instructions here, but I'm going to provide some screenshots so you can see how easy it is.

Step by step guide

First, go to https://console.developers.google.com/flows/enableapi?apiid=maps_backend&keyType=CLIENT_SIDE&reusekey=true

You'll get a screen like this:
As this is probably a new project, you just have to click Continue.
Now you'll see some notifications and progress, and you'll end up with a screen similar to this one.
This doesn't look right; the "You don't have permission to create an API key" message is strange, but the fact is that the "Create" button is enabled and it works, so you can define the allowed referrer sites, or leave that blank for now and adjust it later.
Click "Create" and then you'll get your API key.


Click the Copy icon at its right side and you're almost done. If you already have the Google Maps plugin, open the CKEditor configuration file, add a new entry "googleMaps_ApiKey" and assign it the value that you got:
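Something like this (the key value is obviously a placeholder; you can also pass the same option in the object given to CKEDITOR.replace instead of editing config.js):

CKEDITOR.editorConfig = function (config) {
 // Paste here the API key copied from the Google console
 config.googleMaps_ApiKey = 'AIzaXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
};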
Now if you load CKEditor with the Google Maps plugin, the Maps dialog should work, but the static images will fail. This is because we have enabled the Google Maps API, but you also need to enable the usage of the Static Maps API with this key (and the Street View API as well, if you want to allow your users to use a Street View image as the preview).

So open https://console.developers.google.com/apis/api/static_maps_backend?project=_ and this time, instead of creating a new project, we will use the one that was created previously:
Then click the Enable button
And this step is done; just repeat it for the Street View API at this link:
https://console.developers.google.com/apis/api/street_view_image_backend?project=_

Then enable the Geocoding API so the searches also work in the dialog:
https://console.developers.google.com/apis/api/geocoding_backend?project=_

And this is over!

Summary

Please keep in mind that these steps are the current ones as of July 2016; Google might change things, or you might even see different screens depending on some settings of your account. But the end goal is to get a Google Maps API key, enable the usage of that key also for the Static Maps API, and then put it in a googleMaps_ApiKey entry in the configuration of your CKEditor instance.

Additional notes

I think that the first time you try to get an API key you'll see this screen:
and from then on, when you try to get another key for a new domain, the dialog that is used is the one that I've shown at first.

Also, at the top of the screen you might get a banner to sign up for Google Cloud, but if you plan to stay within the free plan limits you don't need that.


2016/05/23

How to generate unique file names with SimpleUploads

If for any reason you can't change the server back-end that saves your file uploads in CKEditor, and you want to prevent new files with the same names from overwriting existing ones, you can add this code to your page to generate a unique filename for each upload (adjust it to your taste):

CKEDITOR.on('instanceReady', function (e) {
 e.editor.on('simpleuploads.startUpload', function (ev) {
  // Original file name of the dropped or selected file
  var filename = ev.data.name;
  //var extension = filename.match(/\.\w+$/)[0];
  // Prefix it with a timestamp-based id so every upload gets a unique name
  var newName = CKEDITOR.plugins.simpleuploads.getTimeStampId() + '_' + filename;
  ev.data.name = newName;
 });
});

2016/05/21

Debugging client-side and server-side

At Google I/O 2016, one of the sessions was dedicated to what's coming to the Chrome Dev Tools.
Besides the improved features in the tools themselves, they also announced that they are planning to enable debugging of Node.js from Chrome; this way, once that pull request is accepted, Node developers will be able to use Chrome to debug both the client side and the server side.

On the other side, Microsoft announced in February the ability to debug Chrome from VSCode. Using the Chrome debugging protocol, they have created a VSCode extension that connects to Chrome and lets you debug your scripts from your editor.
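If I recall correctly, the setup boils down to a launch.json entry along these lines (the URL and webRoot values are assumptions for a typical local dev server; check the extension's docs for the exact options):

{
 "version": "0.2.0",
 "configurations": [
  {
   "name": "Launch Chrome against localhost",
   "type": "chrome",
   "request": "launch",
   "url": "http://localhost:8080",
   "webRoot": "${workspaceRoot}"
  }
 ]
}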

This reminds me of the VS feature that integrated with IE11 and below so that, when you started debugging a project, besides debugging the server side, the JavaScript debugger in IE itself was disabled and any error showed up in VS.

I must confess that I always hated that behavior. When I'm debugging a web page I'm not looking only at the JavaScript: I must check the DOM to verify that the elements exist, check their attributes, see how the page reacts to changes, etc., so a debugger that only lets me look at the JavaScript is a bad option, and when I had to debug IE I launched a new instance that wasn't hooked to VS so I could use the F12 tools of IE.

I guess that this must be useful for some people or they wouldn't have spent the time to make it work with Chrome, but I really can't see how using VSCode for JavaScript is any better than using only the Chrome Dev Tools, as they are constantly updated and improved and I wouldn't say that they are missing anything important for debugging JS. To debug client-side code I certainly prefer the client-side tools so I keep all the context; I'm not looking only at a JS file.

So going back to debugging Node from Chrome, I guess that it might depend on the quality of your editor (I would say that some people use very bad editors). Until Chrome becomes a full IDE, you're still using another program to write your JS, one that includes plugins, is adjusted to your taste and is integrated with other tools. If it's able to debug Node itself, then I think I would prefer to do that there instead of using Chrome, but obviously this depends on the quality of that debugging experience. The context provided in this situation can be similar in the editor and in Chrome, although your editor might be able to provide better context while Chrome might have better debugging tools.

My only fear is that people are focusing too much on Chrome, and so we might soon see every other browser die because web developers don't test them, users find problems and are told to use Chrome instead, then the statistics say that people only use Chrome, and more developers focus their testing only on Chrome, and we end up in IE6 land: a browser monoculture.