The financial services industry, target of increasingly costly attacks on the DNS

Image source: JimBear via Pixabay

Financial services companies are particularly exposed to cyberattacks. They hold a wealth of customer information, safeguard their customers' money and provide essential services that must be available around the clock, which makes them a lucrative target. Among attackers' favorite vectors: the DNS.

EfficientIP's annual Global DNS Threat Report shows steady growth in both the number of DNS attacks and their financial impact, with an average loss of €1.2 million in 2019, up from an estimated €513,000 in 2017 and €806,000 in 2018.

While every industry is affected by cyberattacks (82% of the companies surveyed were hit and 63% suffered traffic disruption), the financial industry pays a higher price, with 88% of firms impacted. Conducted among 900 respondents in nine countries across North America, Europe and Asia, the study indicates that financial companies suffered an average of 10 attacks over the last 12 months, a 37% increase on the previous year.

Rising costs are only one consequence of DNS attacks for the financial services industry. The most common impacts are internal application downtime (68%) and cloud service downtime (45%). In addition, 47% of financial companies have fallen victim to fraud through phishing attacks targeting the DNS.

The survey clearly shows that the measures implemented to secure the DNS are insufficient. The delay in applying security patches is a major problem for organizations in this industry: in 2018, 72% of the companies interviewed admitted needing three days to deploy a security patch on their systems, three days during which they remain exposed to attacks.

Only 65% of financial institutions use, or plan to adopt, a trusted DNS architecture; they seem perpetually behind, and insufficiently aware of the risks attached to this central point of their infrastructure. Threats against the DNS evolve constantly, and attacks are numerous and complex. Reacting quickly is essential to better protect yourself.

Manufacturing, retail, media, telecoms, healthcare, education, government, services… many other sectors are affected by these attacks. Solutions exist. Every year, ANSSI publishes a best-practice guide on DNS resilience detailing numerous recommendations: relying on an anycast network, having a protection system against DDoS attacks, monitoring DNS traffic with a team able to act quickly, and maintaining an effective security policy. All of these measures are essential to keep the DNS resilient and efficient in the face of attacks that are damaging both financially and to a company's image.

Let's hope the 2020 report finally shows better figures.

Soon a maximum duration of one year for SSL/TLS certificates?

What is happening?

Industry players plan to reduce the lifetime of SSL/TLS certificates, which enable the HTTPS display in browsers, to 13 months, i.e. roughly half the current lifetime of 27 months, in order to improve security.

Google has proposed this change through the CA/Browser Forum, and it has been endorsed by Apple and a Certification Authority, making it eligible for a vote. If the vote passes at one of the next CA/B Forum meetings, the amended requirements will come into effect in March 2020: any certificate issued after that date will have to comply with the shortened validity period.

The aim of this reduction is to complicate matters for attackers by shortening the window during which a potentially stolen certificate can be used. It would also push companies towards the most recent and most secure encryption algorithms available.

If the vote fails, it cannot be ruled out that browsers supporting this requirement will implement it unilaterally in their root programs, thereby forcing the change on the Certification Authorities. This is quite likely: the proposal follows Google's earlier initiative to reduce the lifespan from three years to two in 2018, at which time Google already wanted to bring it down to 13 months or even less.

Who is impacted?

Google's proposed change would affect all users of publicly trusted TLS certificates, regardless of the Certification Authority that issued them. If the vote passes, all certificates issued or reissued after March 2020 will have a maximum validity of 13 months. Companies using certificates with a validity period longer than 13 months should review their systems and evaluate the impact of the proposed change on their implementations and use.

TLS certificates issued before March 2020 with a validity period longer than 13 months will remain operational. Public non-TLS certificates (code signing, private TLS, client certificates, etc.) are not concerned. No existing certificate will need to be revoked when the new standard takes effect; the reduction will simply apply at renewal.
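
To anticipate the change, a simple first step is to inventory the validity window of each certificate in your portfolio; as a sketch, OpenSSL can print the dates of a certificate file (the file name here is illustrative):

  openssl x509 -in www_example_com.pem -noout -startdate -enddate

Any certificate whose window would exceed 13 months after March 2020 should be planned for a shorter renewal cycle.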

What do the market players think about this?

This would be an industry-wide change, affecting all Certification Authorities, and they view the proposal in a negative light. An economic interest is visible above all, but not solely…

Their main argument is that the market is not ready in terms of automated ordering and certificate deployment. Shorter lifetimes would mean more manual interventions, with the associated risk of mishandling, or simply a higher risk of forgetting to renew a certificate.

For Certification Authorities, cutting certificate lifespans this short mainly means higher human costs for certificate portfolio management. While they are not fundamentally against the decision, they would like more time to gauge what users and companies think.

The position of browser makers

For Google and Mozilla, spearheads of the mass adoption of native HTTPS for all websites and supporters of the Let's Encrypt initiative, what matters is encrypting all web traffic. A shorter certificate lifespan reduces the long-term risk posed by stolen certificates and encourages wide adoption of automated management systems. For these two players, the ideal world would have certificates lasting at most 3 months. While they are attentive enough to the market not to impose their views too quickly, it is more than likely that certificate lifespans will keep decreasing over the long term.
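
Automation is what makes such short lifetimes workable in practice. As a sketch, with a Let's Encrypt client such as certbot, renewal can simply be driven by a scheduled job; certbot renews any certificate nearing expiry and leaves the others untouched (the schedule below is illustrative):

  # run every day at 3:00; only certificates close to expiry are actually renewed
  0 3 * * * certbot renew --quiet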

Nameshield’s opinion 

The market continues to evolve towards ever shorter certificate validity, with steadily declining authentication levels and, as a result, a growing need for automated management solutions. We will align with these requirements and advise our customers to prepare for this reduction, which will arrive without a doubt. Our Certification Authority partners will follow this evolution as well and will provide the permanent inventory and automation systems required.

To be heard

The CA/Browser Forum accepts comments from external participants and all discussions are public. You can submit your comments directly to the Forum's distribution list: https://cabforum.org/working-groups/ (at the bottom of the page). Nameshield is in contact with CA/Browser Forum participants and will keep you informed of future decisions.

The Nameshield SSL interface has had a complete makeover

More user-friendly, more comprehensive, more attractive… our brand new and improved Nameshield SSL interface is being launched on Thursday, June 13th, allowing you to manage all of your certificates.

You will now have access to key metrics on your certificate portfolio, to different certificate lookup views (complete portfolio, detailed overview, certificates nearing expiry, pending orders, expired or revoked certificates), to an Organization and Contact management tool, and to a redesigned ordering system.

Lastly, a decision support tool has been included in the interface to help you choose the certificate that’s right for your needs.

The certificate range has been updated to cover all certificate types (SSL, RGS, Code Signing, individual certificates) and all levels of authentication.

The SSL team remains at your disposal for a demonstration, and a complete user guide covering all possible operations and actions is available.

Contact us directly at certificates@nameshield.net.

IoT: time for the Black Swan?

Image source: abudrian via Pixabay

Utilities and their providers are rushing into the connected world, benefiting from the innovations the rest of the world so conveniently supplies. That wouldn't be a problem if we didn't live in an age where hacking a power plant has become possible.

In 2015 and 2016, hackers cut power to thousands of users in the middle of the Ukrainian winter. Since then, the American government has openly admitted that foreign powers try every day to take control of the United States' energy grid control rooms. This matters because we are now connecting decades-old infrastructure to an environment swimming with threats it was never designed to withstand.

Engineers have not always played well with computer scientists. They are different disciplines with different mindsets, different aims, different cultures and, of course, different technologies. Engineers plan for accidents and failures, while cybersecurity professionals plan for attacks. Each discipline has entirely different industry standards, and there are very few standards for the growing field of the Internet of Things (IoT), which is increasingly weaving its way into utility environments. Those two worlds are now colliding.

Much of the IT used in utility infrastructure was previously isolated, operating without fear of hackers, with systems built for availability and convenience, not security. Their creators never considered that a user might have to authenticate to a network to prove they are a trusted actor. That may have been acceptable in the past, but we now have a landscape littered with outdated machines weighed down by insecure code, unequipped for modern IT threats. Upgrading these systems and retrofitting security won't solve all of their security problems, and replacing them entirely would be too expensive, hard to envisage and almost utopian for many. Connecting them to an environment full of threats and adversaries looking for the next easy target is therefore a real problem.

Today the world is ever more connected, particularly through the Internet of Things (IoT): connected cars, baby monitors linked to a parent's smartphone, doorbells that tell homeowners who is at the door, connected fridges and washing machines… and utilities are following the trend, naturally wanting to be part of this drive towards the increasing computerisation of physical objects.

Exciting as these new innovations might sound, evidence of the IoT's insecurity mounts every day. Whether it's hardcoded passwords, an inability to authenticate outward and inward connections or an inability to update, there is little argument about these devices' insecurity. These products are often rushed to market without a thought for this crucial factor.

Enterprises and governments are seizing on the IoT as a way to transform how they do business, and utilities are doing the same. Large infrastructures will increasingly be made up of IoT endpoints and sensors able to relay information to their operators and radically improve how utilities function.

Unfortunately, in the rush to innovate, eager adopters often ignore the glaring security problems that shiny new inventions bring with them. In an industrial or utility environment, the IoT means something similar at a descriptive level but radically different in real-world impact. A connected doll is one thing; a connected power plant is another entirely!

The risks to utilities are real, and there are plenty of examples. Stuxnet, the worm that sabotaged Iran's nuclear enrichment program, is just one; the aforementioned attacks on the Ukrainian power grid are another. Furthermore, Western governments, including France, now admit that foreign actors attempt to hack their utilities on a daily basis.

But if this is such a big problem, you might ask, why hasn't it happened more often? Why haven't we heard about such potentially devastating attacks more? The fact is that many organizations won't even know they've already been hacked. Many go weeks, months and often years without realizing that an attacker has been lurking within their systems; the Ponemon Institute has found that the average time between an organization being breached and the discovery of that fact is 191 days, more than six months. This is especially true when one of those aged legacy systems has no way of telling what is anomalous. Others may simply hide their breach, as many organizations do: such attacks are embarrassing, especially given the regulatory implications and public backlash that a cyberattack on a utility brings with it.

Furthermore, most attacks are not catastrophic events. They are commonly attempts to gain data or access to a critical system, which for most attackers is a valuable enough goal in itself. Edging into the more destructive possibilities of such access would essentially be an act of war, and few cybercriminals want to earn the attention, or the ire, of a nation state.

The black swan theory, formulated by Nassim Nicholas Taleb, describes an event that is hard to predict and seems wildly unlikely but has enormous consequences, and it fits perfectly here. We don't know when, how or even if such an event might happen, but we had better start preparing for it. Even if its likelihood is small, the cost of waiting and not preparing would be far higher. The IoT market, particularly in the utilities sector, needs to start preparing for that black swan.

Public Key Infrastructures (PKI) using certificates will allow utilities to overcome many of these threats, providing unparalleled trust for an often hard-to-manage network. PKI is built on interoperable, standardized protocols that have protected web-connected systems for decades, and it offers the same for the IoT.

PKIs are highly scalable, making them a great fit for industrial and utility environments. Many utilities will seize hold of the IoT through millions of sensors feeding data back to operators and streamlining day-to-day operations, making utilities more efficient. The sheer number of those connections and the richness of the data flowing through them make them hard to manage, hard to monitor and hard to secure.

A PKI ecosystem can secure the connections between devices, systems and the people who use them. The same goes for older systems designed for availability and convenience rather than for the possibility of attack. Users, devices and systems can also mutually authenticate one another, ensuring that a trusted party stands behind each side of a transaction.
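
As a sketch of what this mutual authentication looks like in practice, here is a minimal TLS server using Python's standard ssl module that presents its own certificate and refuses any client unable to present one signed by the operator's CA (the file names, port and CA are illustrative assumptions, not a real deployment):

  import socket
  import ssl

  # Server-side context: present our own certificate and demand one from the client
  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain(certfile="gateway.pem", keyfile="gateway.key")
  ctx.load_verify_locations(cafile="utility-ca.pem")  # the CA that signs device certificates
  ctx.verify_mode = ssl.CERT_REQUIRED  # refuse any client without a valid certificate

  with socket.create_server(("0.0.0.0", 8883)) as server:
      with ctx.wrap_socket(server, server_side=True) as tls_server:
          conn, addr = tls_server.accept()  # handshake completes only if both sides authenticate
          print("mutually authenticated connection from", addr)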

The data constantly travelling back and forth over those networks is encrypted under PKI using the latest cryptography. Attackers who steal that data will find their ill-gotten gains useless when they realize they can't decrypt them.

Further ensuring the integrity of that data is code signing. When devices need to update over the air, code signing lets you know that the author of the update is who they claim to be and that the code hasn't been tampered with since it was written. Secure boot also prevents unauthorized code from loading when a device starts up. PKI allows only secure, trusted code to run on a device, hamstringing hackers and ensuring the data integrity that utilities require.
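
As an illustration of the principle, before installing an update a device can check the image against the vendor's public key; a minimal sketch with OpenSSL (the file names are illustrative):

  openssl dgst -sha256 -verify vendor_public.pem -signature firmware.bin.sig firmware.bin

OpenSSL prints "Verified OK" only if the signature matches, i.e. the firmware is exactly what the vendor signed.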

The possibility of an attack on a utility can sometimes seem beyond the pale. Just a few years ago a hack on a power grid seemed almost impossible. Today, news of IoT vulnerabilities regularly fills headlines around the world. The full destructive implications of this new situation have yet to be realized, but just because all we see are white swans doesn't mean a black one isn't on its way.

Users will soon start demanding these security provisions from companies. The Federal Energy Regulatory Commission (FERC) recently fined a utility company found guilty of 127 different security violations $10 million. The company wasn't named, but pressure groups have mounted a campaign, filing a petition with FERC to publicly name and shame it. Moreover, with the advent of the General Data Protection Regulation and the NIS Directive last year, utilities now have to look much more closely at the way they protect their data. All over the world, governments are examining how to secure the IoT, especially where physical safety risks are involved. Utilities security matters because utilities play a critical role in the functioning of society. It is just as important to drag them into the 21st century as it is to protect them from it. PKIs offer a way to do just that.

Mike Ahmadi, DigiCert VP of Industrial IoT Security, works closely with automotive, industrial control and healthcare standards bodies, leading device manufacturers and enterprises to advance cybersecurity best practices and solutions for protecting against evolving threats.

This article, based on a publication by Mike Ahmadi, is adapted from an article on the Intersec website.

Can the DNS have an impact on SEO?

Image source: geralt via Pixabay

This is a recurring question from our customers: does the DNS, whether well or badly used, have an impact on a website's SEO? We have already discussed the impact of HTTPS on SEO; this is now the occasion to look at the DNS side.

The DNS is an invisible process running in the background, so it is difficult to grasp how it can help or hurt a website's performance and its ranking in search engines, particularly on Google.

This article addresses the possible impact of the DNS by answering the following questions:

  • Does modifying a DNS record affect SEO?
  • Does changing DNS provider affect SEO?
  • What part does the DNS play in a website's migration?
  • Does changing a website's IP address affect its SEO?
  • What about DNSSEC implementation?
  • Can a DNS outage affect SEO?
  • Can a faster DNS improve SEO?

Does a change at the DNS level affect SEO?

1. Modifying a DNS record: be careful with the TTL

A domain name is usually directed to the corresponding web server by creating an A record (IPv4 address). The A record then routes traffic to the IP address of the destination web server. Modifying this record can lead to performance problems.

Indeed, to optimize response times, the DNS allows resolvers to cache information for a given time: the TTL (Time To Live), defined by the domain name's technical manager at configuration time. The usual TTL, such as the one recommended by ANSSI, is several hours for ordinary domain name uses (websites). When an A record is modified, the change may only be taken into account once the TTL has expired, so web users may still be directed to the former configuration for minutes or even hours after the modification.

It is therefore important to reduce the TTL, even temporarily, when making these modifications.
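
In zone-file terms, this is a sketch of the operation (the names, addresses and TTL values are illustrative):

  ; a few days before the change: lower the TTL to 5 minutes
  www.example.com.    300  IN  A  192.0.2.10

  ; once the new destination is live and propagated: restore a longer TTL
  www.example.com.  86400  IN  A  198.51.100.20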

But does that affect SEO? Yes and no. If users are sent to a destination that no longer exists, Google will treat it as a 404 error. Beyond the poor user experience, this is not directly an SEO factor; however, beware of existing backlinks and of too many 404 errors. A low TTL limits the impact of these modifications.

2. Modifying the name servers declared for a domain name

A domain name is associated with name servers (NS) that provide its DNS resolution. The DNS service looks up information on these NS. They can be modified when changing the provider managing the domain name, or simply when moving from one DNS infrastructure to another. Does changing name servers affect SEO?

Depending on the provider and the chosen infrastructure, resolution time may be shorter or longer, with a possible improvement or decline in the SERP (Search Engine Results Page). Indeed, resolution time is taken into account by Google (see below).

And as with a record change, it is recommended to reduce the records' TTL before modifying the name servers, so that DNS resolvers do not keep the former information in cache.
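
Once the name servers have been changed, propagation can be checked from the outside, for instance with dig (the domain is illustrative):

  dig NS example.com +short

As long as the old servers still appear in the answers of public resolvers, the former records may still be served from cache.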

3. DNS-related risk during a website migration

The same principle as above applies: changes to the DNS configuration don't directly affect SEO, but they can create a bad user experience. Here too, the TTL should be seen as a useful lever.

Which specific cases should be considered?

  • Changing web hosting provider
  • Changing DNS hosting provider
  • Moving www. traffic to the "naked domain" (without www.)
  • Moving your domain to a CDN (Content Delivery Network)

4. Changing the destination IP address

No. Pointing a record from one endpoint to another does not affect SEO. The only (very rare) exception would be pointing a domain to an endpoint already identified as a spam server (for example, the IP address of a shared server).

However, pay attention to the IP address in question: one of Google's (many) SEO rules is that the IP address used by a website should be located near its end users.

5. DNSSEC implementation

DNSSEC authenticates DNS resolution through a chain of trust between the different DNS servers involved in it. Like HTTPS, it is an additional security layer to implement; and like HTTPS, it slightly affects page loading time, and therefore potentially the associated SEO. To put this into perspective, DNSSEC is essential to web users' safe browsing and its implementation is recommended. Most companies offering domain name security audits consider DNSSEC necessary, and therefore a rating criterion.
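
Whether a name is signed and validated can be checked with dig: queried through a validating resolver, a correctly signed name comes back with the ad (authenticated data) flag set in the response (the domain is illustrative):

  dig +dnssec www.example.com A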

Does a faster DNS improve SEO?

Google has acknowledged that a web page's loading time affects SERP results. DNS lookup times are generally well under a second, but they can nevertheless affect page loading in the following cases:

1. Recurring failures of the DNS infrastructure

When a DNS server cannot resolve, or takes longer than usual, it can add several seconds to a page's loading time. With an unreliable infrastructure and recurring unavailability, the impact on SEO is proven… not to mention the user experience in the face of repeated failures (higher bounce rate, lower customer retention, damaged trust in the brand, if not lost revenue). It is important to rely on a dependable, trustworthy infrastructure.

2. Network quality and points of presence

This is purely and simply physics: the nearer a name server is to the end user, the less time it needs to answer their request. So-called "anycast" DNS networks (with addressing and routing optimized towards the "nearest" or "most efficient" server), with many points of presence around the world, optimize response times according to geographical location.

Another important point is to have at least three name servers authoritative (SOA) for a domain name, ideally spread across different domain names and TLDs, in order to reduce the risk of a SPOF (Single Point of Failure) in the infrastructure. Indeed, if the whole infrastructure relies on a single domain name, any unavailability of that domain name, for whatever reason, brings the DNS infrastructure down with it. Likewise, at the TLD level, and even if it is less likely, a registry availability problem would affect the entire DNS infrastructure.
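
As a sketch, such a delegation could spread its name servers over distinct domain names and TLDs (all names are illustrative):

  example.com.  IN  NS  ns1.example-dns.net.
  example.com.  IN  NS  ns2.example-dns.org.
  example.com.  IN  NS  ns3.example-dns.io.

An outage affecting one of the carrier domains, or its TLD, then still leaves two name servers able to answer.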

3. Beware of "extended" DNS configurations

It is not unusual to see DNS configurations that reach the final destination through several steps, as in the example below. The resolution time is affected as a consequence, and potentially the performance in terms of SEO.

fr.wikipedia.org.         IN CNAME text.wikimedia.org.
text.wikimedia.org.       IN CNAME text.esams.wikimedia.org.
text.esams.wikimedia.org. IN A     91.198.174.232

Conclusion

SEO is a science to be considered as a whole. As we saw with the impact of a website's switch to HTTPS, this is one ranking factor among others; all other things being equal, it becomes particularly important in the race for a competitive edge on the first page of results.

The same applies to the impact of the DNS on SEO. Can the DNS have an impact? Yes, clearly: in the case of incorrect configurations, or when the DNS infrastructure does not deliver fast enough response times. An anycast DNS infrastructure is essential for any domain name carrying significant web traffic, all the more so at an international level. This is one piece of a larger whole, and this thinking belongs in a global approach to SEO together with the web marketing team.

Cyber resilience: companies more and more effective against cyberattacks

Image source: VISHNU_KV via Pixabay

Last September, Accenture published its study "Gaining Ground on the Cyber Attacker: 2018 State of Cyber Resilience", highlighting both the doubling of the number of cyberattacks suffered by companies (232 on average in 2018 versus 106 in 2017 at the international level) and the improvement in companies' ability to identify and counter these attacks.

The number of attacks more than doubled between 2017 and 2018…

This research deserves attention because it stands apart from the many very alarmist reports. Although everything is not perfect, notably due to the ingenuity and increasing complexity of attacks, companies keep improving their defense capacity, have strengthened their cyber resilience and have remained effective despite the threats. Companies are able to defend themselves better, in particular by detecting attacks much earlier.

… but where a third of attacks succeeded in 2017, the share of successful attacks fell to 1 in 8 (12.5%) in 2018.

A report that blows hot and cold

Security teams have made great progress, but there is still work to be done. Companies now prevent 87% of all targeted attacks, yet still face two to three security breaches per month on average.

Companies might reach cyber resilience within two to three years, but the pressure and the complexity of threats grow every day. While 90% of respondents expect cybersecurity investment to increase over the next 3 years, only 31% think it will be sufficient.

New technologies are essential, but investment is lagging behind. While 83% of respondents agree that new technologies are essential, only two out of five are investing in AI, machine learning and automation technologies.

Confidence in cybersecurity measures remains high, but a more proactive approach is needed. More than 80% of respondents are confident in their ability to monitor for breaches, yet 71% say cyberattacks are still a bit of a black box: they do not know how or when they will affect their organization.

Boards of directors and management are more engaged with cybersecurity. 27% of cybersecurity budgets are authorized by the board of directors and 32% by the CEO. The role and responsibilities of the CISO must evolve towards more transversality within the company.

5 steps to cyber resilience

Accenture highlights five steps to optimize companies' defenses and move towards the ultimate aim of cyber resilience, in a world that keeps shifting towards new threat territories (artificial intelligence, the omnipresence of the cloud, social networks, smartphones, the Internet of Things), with ever more complex threats that are hard to counter and a need that has become strategic: data protection.

  • Build a strong foundation by identifying high-value assets in order to better protect them, including from internal risks. Make sure that controls are implemented throughout the company's value chain.
  • Test IT security by training cybersecurity teams in the best hackers' techniques. Role-play exercises staging an attack team against a defense team, with training coaches, can bring the points needing improvement to light.
  • Employ new technologies. Companies should invest in technologies able to automate cyber defense, in particular the new generation of identity management, which relies on multi-factor authentication and user behavior monitoring.
  • Be proactive and anticipate threats by building a strategic "threat intelligence" team in charge of running an intelligent Security Operations Center (SOC) relying on mass data collection and analysis (a "data-driven" approach).
  • Evolve the role of the CISO (Chief Information Security Officer). The CISO is moving closer to the business lines, striking the right balance between security and risk-taking, and communicating more and more with executive management, which now holds 59% of the security budget versus 33% a year ago.

Conclusion

The Accenture study highlights genuinely growing awareness of cyber threats among companies, and investments made to better protect themselves. The race towards cyber resilience is now on, between ever better organized attackers and ever more sophisticated defense systems. See you at the end of the year to take stock of the forces involved.

Google makes HTTPS encryption mandatory for its 45 new TLDs: .dev / .app / .how…

Source: Sean MacEntee via Flickr

In a recent article on this blog, we mentioned the arrival of Chrome 68 in July 2018 and the fact that HTTP would be flagged "not secure" from then on. This is not the only weapon Google plans to use to encourage large-scale adoption of encrypted websites.

You may not be aware of it, but Google submitted a number of applications to ICANN as part of the new TLD program and, as a registry, secured the management of 45 top-level domains*. Just as the .bank and .insurance extensions have very strict security rules, Google has announced that it will apply HSTS with preloading to its new TLDs, thereby making HTTPS implementation mandatory.

What is HSTS?

HTTP Strict Transport Security (HSTS) is a mechanism by which browsers automatically enforce HTTPS-secured connections instead of unsafe HTTP. For example, if the website http://www.nameshield.net is on the list, a browser will never make an insecure connection to it: it will always be redirected to a URL using HTTPS, and the site will be added to the browser's list of sites that must always be accessed through HTTPS. From then on the browser will always use HTTPS for this site, whatever happens, whether the user reaches it via a bookmark, a link or simply by typing HTTP in the address bar; the user has nothing more to do.
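
Concretely, a site opts in to HSTS by sending a response header over HTTPS; a typical (illustrative) value, with the directives needed to request preloading:

  Strict-Transport-Security: max-age=31536000; includeSubDomains; preload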

HSTS was first adopted by Chrome 4 in 2009 and has since been integrated into all major browsers. The only flaw in the process is that a browser can still reach an unsafe HTTP URL the first time it connects to a site, opening a small window for attackers to intercept the connection and carry out attacks such as Man-in-the-Middle attacks, cookie hijacking or the POODLE SSLv3 attack that made so much news in 2014.

A fully secured Top-Level Domain

HSTS preloading solves all this by shipping a list of HSTS domains inside the browser itself, eliminating that first-connection threat. Even better, preloading can be applied to entire TLDs, not just domains and subdomains, which makes it automatic for anyone who registers a domain name under that TLD.

Adding an entire TLD to the HSTS preload list is also more efficient, since it secures all domains under that TLD without having to include each of them individually. And since HSTS preload list updates can take months to reach browsers, configuring it at the TLD level has the added benefit of making HSTS instant for new websites under those TLDs.

HTTPS deployment will be mandatory for .app and .dev extensions

Google is therefore planning to make HSTS mandatory for its 45 TLDs in the coming months. What does that mean? Millions of new sites registered under each TLD will now be HTTPS (domain owners will need to configure their websites for HTTPS, or they will not work). To use a .dev, .app, .ads, .here, .meme, .ing, .rsvp, .fly… domain name, you will need to acquire an SSL certificate and deploy HTTPS.

Our team is at your disposal for any questions related to TLDs, domain names or SSL certificates.

* Google’s 45 TLDs: .gle .prod .docs .cal .soy .how .chrome .ads .mov .youtube .channel .nexus .goog .boo .dad .drive .hangout .new .eat .app .moto .ing .meme .here .zip .guge .car .foo .day .dev .play .gmail .fly .gbiz .rsvp .android .map .page .google .dclk .search .prof .phd .esq .みんな .谷歌 .グーグル

SSL certificate lifetimes reduced to a maximum of 2 years

The CA/B Forum, the organization that defines the rules for issuing and managing SSL certificates, has approved the reduction of SSL certificate lifetimes to 2 years, against 3 previously. Initiated by the browsers, with Chrome and Mozilla at the helm, this decision moves towards an ever more secure Internet by forcing players to renew their security keys more often and to stay on the latest market standards.

This decision will apply to all Certification Authorities from March 1st, 2018. In order to ensure a smooth transition, Nameshield will no longer offer 3-year certificates as of February 1st, 2018.

What impact on your certificates?

New certificates will thus have a maximum duration of 825 days (2 years and 3 months, to cover the possibility of renewing up to 90 days early). EV certificates were already under this regime, so the change concerns DV and OV certificates in all their forms (standard, multi-site or wildcard). Otherwise, nothing in particular changes for these certificates.

For existing certificates, this new duration has one consequence, since it applies to all certificates from March 1st: a recently issued 3-year certificate that needs replacing beyond the 825-day deadline will have to go through authentication again. It is important to know this in order to avoid urgent reissues, including for a simple SAN addition. Check beforehand whether the certificate to be replaced may be impacted: this is the case for DV and OV certificates, while EV certificates, here again, are not concerned.

Nameshield's SSL team will keep you informed about the certificates concerned.

The CAA becomes mandatory in the small world of SSL

Or how to take advantage of it to implement a certification strategy specific to your company?

In January 2013, a new type of DNS resource record appeared, designed to improve the chain of control in SSL certificate issuance. This record, called CAA for Certification Authority Authorization, lets you specify, for a given domain name, which Certification Authorities are authorized to issue certificates.

This is an extremely interesting creation, particularly for big companies and groups whose technical teams are scattered around the world and for which enforcing a global certification strategy is often difficult. It is not unusual for companies to accidentally discover certificates requested by teams unaware of the processes or by external consultants, issued by Certification Authorities with a poor image, or certificates with a low level of authentication (DV). Implementing CAA records on your domain names is a good way to control what the teams are doing, and recent news in the SSL world will help you do so.

Indeed, while the CAA has been specified in RFC 6844 since 2013, until now a Certification Authority had no obligation to check whether or not it was authorized to issue a certificate for a given domain name; hence a certain uselessness of the record, and very low adoption.

September 8th, 2017: CAA checking becomes mandatory

We had to wait until March 2017 and a positive vote of the CA/B Forum (ballot 187) to make this verification mandatory. Since September 8th, Certification Authorities have a duty to perform this check, at the risk of sanctions from the CA/B Forum and browsers; the recent news concerning Google and Symantec has shown how much that is not in their interest.

Three scenarios can occur during this verification for a given domain name:

  • A CAA record exists and names the Certification Authority: that CA can issue the certificate.
  • A CAA record exists and names a different Certification Authority: the CA CANNOT issue the certificate.
  • No CAA record exists: any Certification Authority can issue an SSL certificate.

It is important to note that several CAA records can be declared for a given domain name. A simple tool (among many others) for testing your domain name is available online: https://caatest.co.uk/
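
As an illustration, a zone could authorize a single Certification Authority, forbid wildcard issuance and ask CAs to report refused requests (the CA identifier and address below are illustrative; each CA publishes the exact string to use in the issue tag):

  example.com.  IN  CAA  0 issue "digicert.com"
  example.com.  IN  CAA  0 issuewild ";"
  example.com.  IN  CAA  0 iodef "mailto:security@example.com"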

How can my company benefit from the CAA?

If it is not already done, the introduction of mandatory CAA checking is the opportunity for your company to define a certification strategy and to ensure it is complied with.

Defining one (or several) Certification Authorities matching your values and your expectations in terms of service quality is the first step.

It requires bringing the marketing stakeholders around the table to validate the impact on website display, and the technical services to vouch for the chosen provider's quality. These CAA records will then need to be declared in the zones of your various domain names.

It is then important to communicate with all operational staff so that they are aware of the rules imposed within the company and are not blocked when obtaining a certificate.

Indeed, Nameshield's experience shows that SSL certificates are often requested in a hurry; moreover, the latest browser versions are not kind to certificate errors, ostentatiously displaying "not secure". Consequently, blocking the issuance of a certificate because the message didn't get through can be damaging.

Such a strategy presents real advantages in certificate control, marketing, technology, risk management and certificate-related costs. It must be conducted with full knowledge of the facts, and our team of SSL experts can assist you in doing so.

The 3 most common DNS attacks and how to defeat them

In October 2016, many popular websites like Amazon, Twitter, Netflix and Spotify became unavailable to millions of web users in the United States for almost 10 hours, i.e. an eternity. The cause: one of the most powerful attacks in Internet history, against the DNS services of Dyn, a major player in this sector.

Other companies like Google, The New York Times and many banks have also been victims of various kinds of DNS-targeting attacks over the last few years, and while in many companies the DNS remains an afterthought, things are evolving towards an awareness forced by these many attacks.

Attack #1: DNS cache poisoning and spoofing

The aim of DNS poisoning is to send web users to a scam website. For example, a user enters gmail.com in their browser to consult their mailbox. The DNS having been poisoned, it is not the gmail.com page that is displayed but a scam page chosen by the criminal in order, for example, to capture the mailbox credentials. Users entering the correct domain name will not notice that the website they are visiting is not the right one but a fake.

This creates a perfect opportunity for cybercriminals to use phishing methods to steal information, whether credentials or credit card details, from unsuspecting victims. The attack can be devastating, depending on several factors, notably the attacker's intention and the scope of the DNS poisoning.

How do the hackers pull this off? By exploiting the DNS cache system.

DNS caching is used across the whole web to accelerate loading times and reduce the load on DNS servers. The cache of a web document (web page, images) serves to reduce bandwidth consumption and web server load, or to improve consultation speed in the browser; a web cache keeps copies of the documents passing through it. Once a system has queried a DNS server and received an answer, it records the information in a local cache for faster reference, for a given time, without having to look the information up again. The cache can then answer subsequent requests from its copies, without involving the original server.

This approach is used all over the web, routinely and in chains. Records from one DNS server are used to cache records on another DNS server. That server is used to cache DNS records on network systems such as routers, and those records are used to build caches on local machines.
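
A minimal sketch of the caching logic repeated at each link of that chain: an answer is stored and reused until its TTL expires (upstream_lookup stands in for the query to the next server):

  import time

  cache = {}  # name -> (record, expiry timestamp)

  def resolve(name, upstream_lookup):
      entry = cache.get(name)
      if entry and entry[1] > time.time():
          return entry[0]                        # served straight from cache
      record, ttl = upstream_lookup(name)        # ask the next server in the chain
      cache[name] = (record, time.time() + ttl)  # keep the answer for TTL seconds
      return record

A forged record planted in any of these caches is then served, unquestioned, to everything downstream until its TTL runs out.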

DNS poisoning happens when one of these caches is compromised.

For example, if a cache on a network router is compromised, anyone who uses it can be misdirected to a fraudulent website, and the forged DNS records then spread to the DNS caches on each user's machine.

This attack can also target the upper links of the chain. For example, a major DNS server can be compromised, corrupting the caches of DNS servers managed by Internet service providers. The "poison" can then spread to their customers' systems and network devices, redirecting millions of people to fraudulent websites.

Does that sound crazy? In 2010, many American web users couldn't access websites like Facebook and YouTube because a DNS server of a high-level Internet service provider had accidentally fetched records from China's Great Firewall (the Chinese government blocks access to these websites).

The antidote to this poison

DNS cache poisoning is very difficult to detect. It can last until the TTL (time to live, the validity period of the cached data) expires, or until an administrator notices the problem and resolves it. Depending on the TTL duration, it can take servers several days to resolve the problem on their own.

The best ways to prevent a DNS cache poisoning attack include regular software updates, shorter TTLs, and regularly flushing the DNS caches of local machines and network systems.
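
For the local machines, each system has its own cache-flushing command, for example (depending on the system and the resolver it runs):

  ipconfig /flushdns              (Windows)
  sudo dscacheutil -flushcache    (macOS)
  sudo resolvectl flush-caches    (Linux with systemd-resolved)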

For the registries that allow it, implementing DNSSEC is the best solution: it signs the domain name's zones along the whole chain and makes a cache poisoning attack impossible.

Attack #2: DNS amplification attack (a DDoS variant)

DNS amplification attacks are not threats against DNS systems themselves. Instead, they exploit the open nature of DNS services to reinforce the power of distributed denial-of-service (DDoS) attacks. Such attacks are hardly obscure, having targeted well-known websites such as the BBC, Microsoft and Sony…

Reflect and amplify

DDoS attacks generally occur with the help of a botnet. The attacker uses a network of malware-infected computers to send massive amounts of traffic towards the target, such as a server. The goal is to overload the target and slow it down or crash it.

Amplification attacks add more power. Instead of sending traffic directly from a botnet to a victim, the botnet sends requests to other systems, which respond by sending much larger volumes of traffic to the victim.

DNS amplification attacks are a perfect example. The attackers use a botnet to send thousands of lookup requests to open DNS servers. The requests carry a spoofed source address (the victim's) and are crafted to maximize the quantity of data returned by each DNS server.

The result: the attacker sends relatively small amounts of traffic from a botnet and generates proportionally larger, "amplified" volumes of traffic from the DNS servers. The amplified traffic is directed at the victim and brings the system down.

Detect and defend

Some firewalls can be configured to recognize and stop DDoS attacks as they occur, by dropping the artificial packets attempting to flood systems on the network.
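
On the DNS server side, response rate limiting prevents an open or authoritative server from being used as an amplifier; a sketch for BIND (version 9.10 or later; the threshold is illustrative):

  options {
      rate-limit {
          responses-per-second 10;  // cap identical answers sent to one client netblock
      };
  };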

Another way to fight these DDoS attacks is to host your architecture on several servers: if one server is overloaded, another remains available. If the attack is weak, the IP addresses sending the traffic can be blocked. Increasing the server's bandwidth can also help it absorb an attack.

Many dedicated solutions also exist, designed exclusively to fight DDoS attacks.

Attack #3: DDoS attack on the DNS

DDoS attacks can be used against many types of systems, including DNS servers. A successful DDoS attack against a DNS server can cause an outage that leaves users unable to browse the web. (Note: users will likely still reach websites they have visited recently, assuming the DNS records are kept in a local cache.)

This is what happened to Dyn's DNS services, as described at the beginning of this article. The DDoS attack overwhelmed the DNS infrastructure, preventing millions of people from accessing major websites whose domain names were hosted on it.

How can you defend yourself against these attacks? It all depends on your DNS configuration.

For example, do you host your own DNS server? If so, there are measures you can take to protect it: apply the latest patches and allow only local computers to access it.

Or are you trying to reach the attacked DNS server? In that case, you will probably struggle to connect. That is why it is wise to configure your systems to rely on more than one DNS server: if the primary server stops answering, a backup server is available.
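
On a Unix-like client, this simply means declaring several resolvers; a sketch of /etc/resolv.conf (the addresses are illustrative):

  # primary resolver
  nameserver 192.0.2.53
  # backup resolver, tried if the primary fails to answer
  nameserver 198.51.100.53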

Anticipate and mitigate attacks

DNS server attacks are a major network security risk and must be taken seriously. Companies, hosting providers and Internet service providers all implement backup measures to prevent, and reduce the effects of, this kind of attack when they fall victim to it.

Following these attacks, ICANN has stressed more strongly than ever the need for the DNSSEC protocol, which signs each DNS answer with a certified signature and thereby guarantees its authenticity. The drawback of this technology is that it has to be implemented at every stage of the DNS resolution chain to operate properly, which is happening slowly but surely.

Opt for infrastructures hosted and maintained by DNS experts. Make sure the network is anycast (multiple points of presence spread around the world, or at least across your zones of influence), benefits from anti-DDoS filtering, and offers additional security options such as DNSSEC but also failover, so the DNS can be integrated into your business continuity and disaster recovery plans.

Nameshield operates its own DNS Premium infrastructure to meet its customers' needs. This infrastructure meets (and even exceeds) all of ANSSI's prerequisites, and the DNS Premium solution falls within the scope of our ISO 27001 certification.

Don't hesitate to contact us with any questions regarding cyberattacks.