
JavaScript Most Popular Language: Stack Overflow Report

According to the latest Stack Overflow developer survey, JavaScript is the most popular programming language and Rust is most loved.

Stack Overflow, the popular question-and-answer community site for developers, today released the results of its annual developer survey, which indicates, among other things, that JavaScript is the most popular programming language among respondents. More than 50,000 developers—56,033 to be exact—in 173 countries around the world responded to the survey.

While the 2016 Stack Overflow survey reached only 0.4 percent of the estimated 15 million developers worldwide, a large majority of respondents (85.3 percent of full-stack developers) cited JavaScript as the programming language they most commonly use. Meanwhile, 32.2 percent of respondents cited Angular as the most important technology to them and 27.1 percent cited Node.js, giving JavaScript and JavaScript-based technologies three of the top 10 slots among the most popular technologies used by developers. Angular was number five and Node.js came in at number eight.

Read more at eWeek

42 Best Free Linux Email Software

Email is arguably one of the most popular and useful functions of a Linux system. Fortunately, there is a wide selection of free email software available on the Linux platform that is stable, feature-laden, and ideal for personal and business environments. Sending and receiving email, running a mail server, filtering spam, and administering a mailing list are just some of the options explored in this article.

To provide an insight into the quality of software that is available, we have compiled a list of 42 high-quality Linux email applications covering a broad spectrum of uses. There’s a mix of desktop and server-based applications included. Hopefully, there will be something of interest for all types of users.

Full article: http://www.linuxlinks.com/article/20160318025009703/EmailApps.html

Check if a Machine Runs on a 64-bit or 32-bit Processor/Linux OS

Q. How can I check whether my server has a 64-bit processor and whether it is running a 64-bit or 32-bit operating system?

Before answering that question, we have to understand the following points.

We can run a 32-bit operating system on a 64-bit processor/system.
We can run a 64-bit operating system on a 64-bit processor/system.
We cannot run a 64-bit operating system on a 32-bit processor/system.
We can run a 32-bit operating system on a 32-bit processor/system.

Once we are clear about the above four points, we can check whether our machine has a 64-bit processor or not.

How do I check whether my CPU is a 64-bit-capable processor?

There are two commands you can use to check whether it is a 64-bit processor.

Option 1: Use the lscpu command to check which CPU operation modes the processor supports (16-, 32-, or 64-bit mode).

As a normal or root user, execute the command below:

lscpu | grep op-mode

Sample output on a 64-bit processor

$ lscpu | grep op-mode
CPU op-mode(s): 32-bit, 64-bit

Sample output on a 32-bit processor

[surendra@www ~]$ lscpu | grep op-mode
CPU op-mode(s): 32-bit

Notice that the first output says the CPU supports both 32-bit and 64-bit operating modes, which, from our four rules above, indicates a 64-bit processor. The second machine reports only the 32-bit CPU mode, which indicates a 32-bit processor.

Option 2: Use the /proc filesystem's cpuinfo file to get the processor type.

grep -w lm /proc/cpuinfo

Sample output on a 64-bit processor, where the lm (long mode) flag is present

grep -w lm /proc/cpuinfo
flags : tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr

Sample output on a 32-bit processor, where the lm (long mode) flag is absent

[surendra@www ~]$ grep -w lm /proc/cpuinfo

If you don’t get any output, that indicates it is not a 64-bit processor.

How do I check whether my operating system is 64-bit or 32-bit?

Before getting into this, you should know about the i386, i486, etc. naming convention.

What is the difference between i386, i486, i586, and i686 OSes?

Read the full post: http://www.linuxnix.com/find-determine-if-a-machine-runs-64-bit-linux-os-or-not/


Manjaro LXQt 16.03 – Video Overview and Screenshot tours

Manjaro LXQt 16.03 has been released and announced by the developers of the Manjaro community. This release ships with the latest build of the next-generation LXQt desktop environment (LXQt 0.10) and is powered by the long-term-support Linux kernel 4.4.x. Manjaro LXQt 16.03 uses the same build of Manjaro Linux that is already in use in Manjaro Linux Cinnamon 16.02 and Manjaro Linux Deepin 16.03.

More Details : Manjaro LXQt 16.03 – Video Overview and Screenshot tours

How to install TYPO3 7 with Nginx and MariaDB on Debian 8 (Jessie)

This tutorial shows how to install and configure a TYPO3 (version 7) website on a Debian 8 (Jessie) server with Nginx as the web server and MariaDB as the database server. TYPO3 is an enterprise-class CMS written in PHP with a large user and developer community.

Read more at HowtoForge

Node.js Buffer API Changes

By James M Snell.  I recently landed Node.js Pull Request https://github.com/nodejs/node/pull/4682 into the master branch. It’s important to understand what it does and what changes are in store for the upcoming Node.js v6 release.

The Node.js Buffer API is one of the most extensively used APIs in Node. There is, however, a challenge. About three months ago, a discussion started around ambiguities and usability issues that exist when using the Buffer() method to create new Buffer instances. Take the following example:

const jsonString = getJsonStringSomehow();
const myBuffer = Buffer(JSON.parse(jsonString));

Many developers may or may not be familiar with the fact that it is possible to generate a JSON representation of a Buffer instance, pass that around, and create a new Buffer from that JSON string. For instance, given the following Buffer instance:

Buffer([1,2,3])

The JSON representation is:

{"type":"Buffer","data":[1,2,3]}

The example code above essentially takes that JSON string, parses it, and passes it off to the Buffer constructor to create the new Buffer. Easy. But there’s a problem. What happens if the jsonString happens to simply be a number instead of the actual JSON representation of the Buffer?

const jsonString = '100';
const myBuffer = Buffer(JSON.parse(jsonString));

What many developers do not realize is the fact that passing this jsonString to JSON.parse(jsonString) will successfully parse the input as a JSON Number. Passing this number into the Buffer constructor will cause the Buffer to be created by allocating new, uninitialized memory. This uninitialized memory can contain potentially sensitive data that can end up being leaked if not handled appropriately.
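One way to guard against this, assuming the input is expected to be the JSON form of a Buffer, is to validate the parsed value before constructing anything. The helper below is a hypothetical sketch for illustration, not code from the pull request:

// Hypothetical helper: accept only the JSON shape of a Buffer, never a bare
// Number, before handing anything to the Buffer constructor.
function bufferFromJsonString(jsonString) {
  const parsed = JSON.parse(jsonString);
  if (parsed === null || typeof parsed !== 'object' ||
      parsed.type !== 'Buffer' || !Array.isArray(parsed.data)) {
    throw new TypeError('Expected the JSON representation of a Buffer');
  }
  // parsed.data is an array of byte values, so no size-based (uninitialized)
  // allocation can occur here.
  return Buffer(parsed.data);
}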

While the behavior of the Buffer constructor has been well documented for quite some time, the fact that the Buffer constructor implements significantly different behavior based on what kinds of values are passed into it represents a fundamental API usability issue that if not understood can lead a developer to inadvertently introduce significant bugs and vulnerabilities into their applications. To fix this https://github.com/nodejs/node/pull/4682 introduces a number of new constructor methods used to create Buffer instances. The existing Buffer() constructor will continue to work, but starting with the upcoming release of Node.js v6, it is recommended that all developers begin migrating their code to use the new constructors.

// Allocate *initialized* memory. It will be zero-filled by default.
// The optional fill and encoding parameters can be used to specify
// an alternate fill value.
Buffer.alloc(size[, fill[, encoding]])
// Allocate *uninitialized* memory
Buffer.allocUnsafe(size)
// Create a Buffer from a String, Array, Buffer, or ArrayBuffer
Buffer.from(str[, encoding])
Buffer.from(array)
Buffer.from(buffer)
Buffer.from(arrayBuffer[, offset[, length]])

The new Buffer.allocUnsafe(size) method is the direct replacement for the existing Buffer(size) constructor where size is the number of bytes to allocate for the newly created Buffer. It is important to understand that the memory allocated by this method is uninitialized and must be overwritten completely in order to avoid accidentally leaking data when the Buffer is read out.
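As a rough sketch of that usage pattern (illustrative only, not taken from the pull request):

// Uninitialized memory: overwrite every byte before the Buffer is used.
const buf = Buffer.allocUnsafe(16);
buf.fill(0);           // zero the whole allocation first
buf.write('hello');    // then write the real payload into the zeroed buffer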

The new Buffer.alloc(size[, fill[, encoding]]) method, on the other hand, always allocates Buffer instances with initialized memory. If the fill parameter is left undefined, the instance will be zero-filled. If the fill parameter is provided, the new Buffer is filled with that value automatically. This is generally equivalent to the existing Buffer(size).fill(val) pattern.
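For illustration, a small sketch of both forms (the byte values in the comments are what one would expect):

const zeroed = Buffer.alloc(10);       // <Buffer 00 00 00 00 00 00 00 00 00 00>
const filled = Buffer.alloc(10, 'a');  // <Buffer 61 61 61 61 61 61 61 61 61 61>
// Roughly the same result as the older Buffer(10).fill('a') pattern.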

The various Buffer.from() methods are the direct replacement for the equivalent Buffer(str[, encoding]), Buffer(array), Buffer(buffer), and Buffer(arrayBuffer) constructors. The key difference with Buffer.from() is that an error will be thrown if the first argument passed is a Number.
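By way of example, here is a sketch of how the earlier JSON case changes under the new API (hypothetical input, not code from the pull request):

const input = '100';  // parses to the Number 100, not a Buffer representation

// Old constructor: the Number is treated as a size, handing back
// uninitialized memory.
const risky = Buffer(JSON.parse(input));

// New API: Buffer.from() refuses a Number and throws a TypeError instead.
try {
  Buffer.from(JSON.parse(input));
} catch (err) {
  console.error(err.message);
}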

Another significant addition coming in v6 is the introduction of the --zero-fill-buffers command-line flag. Setting this flag when launching Node will force all Buffers created using Buffer.allocUnsafe(), Buffer(size), and SlowBuffer(size) to be zero-filled by default, overriding the existing default behavior.

$ node
> Buffer.allocUnsafe(10)
<Buffer 50 04 80 02 01 00 00 00 0a 00>
$ node --zero-fill-buffers
> Buffer.allocUnsafe(10)
<Buffer 00 00 00 00 00 00 00 00 00 00>

Using this new command-line flag, developers can continue to safely use older modules that have not yet been updated to use the new constructor APIs and that may not currently be properly validating the input to the Buffer() constructor.

With these changes, you may be wondering what is going to happen to the existing Buffer() constructor. The answer is simple: nothing much. This pull request adds a note to the documentation that the existing Buffer() constructors have been deprecated; however, the constructor will continue to operate without any changes. In Node.js core terms we call this a "soft deprecation" or "docs-only deprecation". Existing code that uses the Buffer() constructor will continue to operate as it has before.

It must be noted, however, that an additional change is being considered for the Buffer(size) constructor. Currently (and historically), Buffer(size) has always allocated uninitialized memory. The additional change being considered is to switch that so that Buffer(size) will allocate initialized memory by default. If this change is made, it will have to be backported to all Node.js release streams (v5, v4, v0.12 and v0.10). The decision to make this change is still being discussed. For now, however, Buffer(size) continues to operate as it always has.

James M Snell is IBM Technical Lead for Node.js.

This article originally posted on Medium.

ProtonMail’s Encrypted Email Service Exits Beta, Adds iOS, Android Apps

As Apple continues to battle the US Government’s desire to work around the security of its mobile operating system, European encrypted email startup ProtonMail is choosing the latest skirmish in the crypto wars to launch its end-to-end encrypted email service out of beta, switching from invite-only to public sign-ups today.

It’s also launching its first native iOS and Android apps. Previously, the free encrypted email client was accessible via a web interface. The company has open sourced its web interface to bolster trust in its end-to-end encryption. The new mobile apps will also be open sourced in time.

Read more at TechCrunch

IoT Bringing Changes to Company Networks, Big Data Projects

The Internet of Things is big, and only getting bigger. A new report from Ovum suggests that now is the time for IT to get networks ready to collect all this data and use it to business advantage.

With 20 billion devices projected to be connected to the Internet by 2020, the volume of data available will be on a scale never experienced before. That will open opportunities to exploit data and gain insights that IT and business managers haven’t had access to previously. Among other things, the Internet of Things is going to challenge current data capture, storage, and analysis practices. It will move data analysis out toward the edge of the network, closer to where the data is being generated. It is also likely to require new types of database technology.

Read more at Information Week

VMware Fixes XSS Flaws in vRealize for Linux

VMware patched two cross-site scripting issues in several editions of its vRealize cloud software. These flaws could be exploited in stored XSS attacks and could result in the user’s workstation being compromised.

The input validation error exists in Linux versions of VMware vRealize Automation 6.x prior to 6.2.4 and vRealize Business Advanced and Enterprise 8.x prior to 8.2.5, VMware said in the advisory (VMSA-2016-0003). Linux users running affected versions should update to vRealize Automation 6.2.4 and vRealize Business Advanced and Enterprise 8.2.5 to address the problems.

Read more at IT World

Crate Built a Distributed SQL Database System To Run Within Containers

Crate Technology has designed a database system for supporting Docker containers and microservices. The technology stresses ease of use, speed, and scalability while retaining the ability to use SQL against very large data sets.

Crate was built to run in ephemeral environments, said Christian Lutz, Crate CEO. It was the ninth official Docker image in the Docker Registry and has been downloaded more than 350,000 times in the past six months. It can be managed with Docker tools, or with Kubernetes or Mesos.

“We believe that microservices architecture will definitely win,” Lutz said. “What you do with application containers already in production will very quickly create a requirement for databases [in containers] as well. If you don’t have the architecture for that, you can’t run in a scalable way in containers. You can take a database that doesn’t work in a container — obviously, SQL works in a container — but you have one node and one volume, but you can’t add hundreds of nodes. That’s why I think it will be very important when people start to put the database into containers.”

Read more at The New Stack