
IBM Index: A Community Event for Open Source Developers

The first-ever INDEX community event, happening now in San Francisco, is an open developer conference featuring sessions on topics including artificial intelligence, machine learning, analytics, cloud native computing, containers, APIs, languages, and more.

The event will also feature a keynote presentation from The Linux Foundation’s executive director, Jim Zemlin, who will discuss building sustainable open source projects to advance the next generation of modern computing.

Angel Diaz, VP of Developer Advocacy and OSS at IBM

In this article, we talk with Angel Diaz, VP of Developer Advocacy and OSS at IBM, who explains more about what to look forward to at this event.

It looks like there’s a heavy Open Source flavor in the Index conference lineup. Tell us more.

Angel Diaz: Absolutely. There are 26 different developer sessions related to Open Source at Index — covering everything from best practices for building cloud native applications to Open Mainframe, and everything in between.

Open Source is the reality of the enterprise stack today. Open Source has brought compute, storage, and network together in OpenStack. It’s brought unity around 12-factor applications in Cloud Foundry. It’s brought the world together around microservices via the Cloud Native Computing Foundation, Docker, and Kubernetes. And it’s brought the industry together in serverless around projects like OpenWhisk. When you look at data, we’ve been democratizing it for the masses with Apache Spark and data science — and in AI, projects like SystemML and TensorFlow are bridging the gap between information and the insights you can get from that data. With transactions, Open Source is of course behind Hyperledger and is re-establishing what a transaction is through blockchain. If it’s hot and it matters in the enterprise stack today, chances are it’s Open Source.

Linux obviously was a huge part of IBM’s heritage with Open Source. Tell us a little bit about the modern outlook of the company as Open Source has grown up so much, and the role that IBM sees itself playing in the community?

Diaz: In this second renaissance of Open Source that we’re in today, IBM and our partners have clear centers of gravity around cloud, data, AI, and transactions. It’s very important as an industry to create these centers of gravity. When you’re creating an open platform for cloud, data, and AI, it’s important that you’re bringing together communities — where code bases, use cases, and developers are all equal. That’s how you create a great platform as a vendor, too. That’s how you create an environment where everyone can benefit. Open Source innovation around cloud, data, and AI is really the outlook for how IBM built our cloud. We’re trying to make the IBM Cloud the best platform for any Open Source developer.

And how you consume Open Source is as important as the code you write. IBM has been doing Open Source since the 1990s. It’s a huge part of our strategy. From the days of Linux, Eclipse, Apache, to where we are now — we have the IBM Open Source Way. It’s how we culturally think about and leverage Open Source. It describes to the world how we at IBM do Open Source at scale. We don’t just use Open Source, we contribute as much as we use. And in the Open Source Way, we talk about how we have operationalized Open Source at scale across IBM. These methods are best practices that any enterprise that wants to learn how to do Open Source should take to heart.

For example, what is the Open Source etiquette? Don’t be a talker, be a doer. Don’t flaunt your title, don’t be a drive-by committer, start small and earn trust. Most importantly, be authentic. Anybody who wants to build an Open Source program or be a citizen of Open Source should take a look at this. How you behave in Open Source is just as important as the code that you build.

There are a lot of great events out there. Why should enterprise developers who care about Open Source be at the Index event next month?

Diaz: It’s a great opportunity for developers who are embedded in the second renaissance of Open Source to meet their peers in these centers of gravity around cloud, data, and AI. It’s a vendor-neutral event that will bring together developers from across the globe to build these Open Source technologies — everything from Kubernetes to OpenAPI, TensorFlow, and Spark; every Open Source community. It’s a great opportunity to go and participate: it’s very inexpensive, developer-to-developer, with no marketing. Index will be a conference to learn about these technologies and their place in Open Source.

Don’t miss the opportunity to join the conversation February 20-22 at INDEX.

How to Get Started Using WSL in Windows 10

In the previous article, we talked about the Windows Subsystem for Linux (WSL) and its target audience. In this article, we will walk through the process of getting started with WSL on your Windows 10 machine.

Prepare your system for WSL

You must be running the latest version of Windows 10 with the Fall Creators Update installed. To check which version of Windows 10 is installed on your system, search for “About” in the search box of the Start menu. You need version 1709 or later to use WSL.


If an older version is installed, you need to download and install the Windows 10 Fall Creators Update (FCU) from this page. Once FCU is installed, go to Update Settings (just search for “updates” in the search box of the Start menu) and install any available updates.

Go to Turn Windows Features On or Off (you know the drill by now), scroll to the bottom, and tick the box for Windows Subsystem for Linux. Click OK. Windows will download and install the needed packages.


Upon completion of the installation, the system will offer to restart. Go ahead and reboot your machine; WSL won’t launch without a system reboot.


Once your system restarts, go back to Turn Windows Features On or Off to confirm that the box next to Windows Subsystem for Linux is selected.

Install Linux in Windows

There are many ways to install Linux on Windows, but we will choose the easiest: open the Windows Store and search for Linux.


Click on Get the apps, and the Windows Store will offer three options: Ubuntu, openSUSE Leap 42, and SUSE Linux Enterprise Server (SLE). You can install all three distributions side by side and run them simultaneously. To use SLE, you need a subscription.

In this case, I am installing openSUSE Leap 42 and Ubuntu. Select your desired distro and click on the Get button to install it. Once installed, you can launch openSUSE in Windows. It can be pinned to the Start menu for quick access.


Using Linux in Windows

When you launch the distro, it will open a Bash shell and finish installing itself. Once installed, you can go ahead and start using it. Simple. Just bear in mind that openSUSE has no regular user by default and runs as the root user, whereas Ubuntu will ask you to create a user. On Ubuntu, you can perform administrative tasks with sudo.

You can easily create a user on openSUSE:

# useradd [username]

# passwd [username]

Create a new password for the user and you are all set. For example:

# useradd swapnil

# passwd swapnil

You can switch from root to this user by running the su command:

su swapnil

You do need a non-root user for many tasks, such as using rsync to move files on your local machine.

The first thing you need to do is update the distro. For openSUSE:

zypper up

For Ubuntu:

sudo apt-get update

sudo apt-get dist-upgrade


You now have a native Linux Bash shell on Windows. Want to ssh into your server from Windows 10? There’s no need to install PuTTY or Cygwin. Just open Bash and then ssh into your server. Easy peasy.

Want to rsync files to your server? Go ahead and use rsync. It really transforms Windows into a usable machine for those Windows users who want native Linux command-line tools on their machines without having to deal with VMs.

Where is Fedora?

You may be wondering about Fedora. Unfortunately, Fedora is not yet available through the store. Matthew Miller, the Fedora project leader, said on Twitter, “We’re working on resolving some non-technical issues. I’m afraid I don’t have any more than that right now.”

We don’t know yet what these non-technical issues are. When some users asked why the WSL team could not publish Fedora themselves — after all, it’s an open source project — Rich Turner, a project manager at Microsoft, responded, “We have a policy of not publishing others’ IP into the store. We believe that the community would MUCH prefer to see a distro published by the distro owner vs. seeing it published by Microsoft or anyone else that isn’t the authoritative source.”

So, Microsoft can’t just go ahead and publish Debian or Arch Linux on Windows Store. The onus is on the official communities to bring their distros to Windows 10 users.

What’s next

In the next article, we will talk about using Windows 10 as a Linux machine and performing most of the tasks that you would perform on your Linux system using the command-line tools.

Things To Know About Three Upcoming Cloud Technologies

Recently, three noteworthy trends have emerged in cloud computing: the rise of microservices, the maturing of the public cloud, and a new wave of open source cloud computing projects. These projects take advantage of public cloud elasticity and strongly influence how applications are designed.

Knowing The Market

Previously, cloud computing meant migrating applications to Azure, Google, and Amazon Web Services: applications that ran on hardware in private data centers were virtualized and installed in the cloud. Now the cloud market has matured, and more applications are written for, and installed directly to, the cloud.

1.    Cloud Native Applications

If you search for what cloud native applications means, you will find there is no textbook definition. In simple terms, a cloud native application is one designed to scale to thousands of nodes and to run in modern distributed systems environments. Many organizations, small and large, are moving to the cloud because of the innumerable benefits associated with it. Let’s consider the design pattern of these applications.

Before the emergence of the cloud, virtualization played a pivotal role: operating systems became portable, running inside virtual machines that, depending on compatibility with hypervisors such as KVM, VMware, or Xen, could move from one server to another. More recently, the abstraction has moved up to the application level: applications are container based and run in portable units that can move from server to server with ease, regardless of the hypervisor, thanks to container technologies such as CoreOS and Docker.

2.    Containers  

Containers are the most recent addition to the cloud technologies, notably CoreOS and Docker. They are an evolution of earlier innovations, including Linux control groups (cgroups) and LXC, that make applications portable: an application can move from a development environment to production without reconfiguration.

Applications are now deployed from registries, and through continuous delivery systems, into containers that are configured using tools such as Puppet, Chef, or Ansible.

Ultimately, to scale out applications, schedulers such as Docker Swarm, Mesos, Kubernetes, and Diego coordinate the containers across nodes and machines.

3.    Unikernels

Unikernels are another upcoming technology with similarities to containers. A unikernel is a pared-down OS combined with a single application into a unikernel application that runs inside a virtual machine. Unikernels are sometimes called library operating systems, because they include libraries that let applications use hardware and network protocols, combined with a set of policies for network-layer isolation and access control. In the 1990s, such systems were known as Nemesis and Exokernel; present-day unikernels include OSv and MirageOS. Unikernel applications can be deployed across various environments. Unikernels can create highly specialized, isolated services, and they have recently seen increasing use for application development in microservices architectures.

GitHub Predicts Hottest 2018 Open Source Trends

According to GitHub’s announcement of its findings, the company looked at three different types of activity. It identified the top 100 projects that had at least 2,000 contributors in 2016 and saw the largest increase in contributors in 2017; the top 100 projects with the largest increase in visits to their repos in 2017; and the top 100 projects that received the most new stars in 2017. Combining these lists, the company grouped projects into broad communities, focusing on those most represented at the top of the lists.

The hottest project and community results in 2017, then, would logically foretell growth areas and trends for the coming year. This is what emerged:

Read more at The New Stack

Rookie’s Guide to Ethereum and Blockchain

What is Blockchain?

“The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.” — Don & Alex Tapscott, authors of Blockchain Revolution (2016)

Blockchain is the digital and decentralized ledger that records all transactions. (See the “Blockchain Simplified” video for more information.) Every time someone buys digital coins on a decentralized exchange, sells coins, transfers coins, or buys a good or service with virtual coins, a ledger records that transaction, often in an encrypted fashion, to protect it from cybercriminals. These transactions are also recorded and processed without a third-party provider, which is usually a bank.

A distributed database

Picture a spreadsheet that is duplicated thousands of times across a network of computers. Then imagine that this network is designed to regularly update this spreadsheet and you have a basic understanding of the blockchain.

Information held on a blockchain exists as a shared — and continually reconciled — database. This is a way of using the network that has obvious benefits. The blockchain database isn’t stored in any single location, meaning the records it keeps are truly public and easily verifiable. No centralized version of this information exists for a hacker to corrupt. Hosted by millions of computers simultaneously, its data is accessible to anyone on the Internet.
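The “continually reconciled database” idea can be made concrete with a minimal Python sketch of a hash chain. This is illustrative only — real blockchains serialize blocks very differently — but it shows the core property: each block commits to the hash of the previous one, so tampering with any historical record is immediately detectable.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each block stores the hash of the previous block, so editing
    # any earlier record breaks every link that follows it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    # Re-derive each link and confirm it matches what was recorded.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
print(verify(chain))   # True

# Tamper with history: the stored prev_hash no longer matches.
chain[0]["transactions"][0] = "alice pays bob 5000"
print(verify(chain))   # False
```

Because every node holds its own copy of the chain, a tampered copy fails this check everywhere except on the attacker's machine, which is why no single hacker can corrupt the shared record.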


Why was blockchain invented?

The main reason we even have this cryptocurrency and blockchain revolution is the perceived shortcomings of the traditional banking system. What shortcomings, you ask? For example, when transferring money to overseas markets, a payment could be delayed for days while a bank verifies it. Many would argue that financial institutions shouldn’t tie up cross-border payments and funds for such an extensive amount of time.

Likewise, banks almost always serve as an intermediary of currency transactions, thus taking their cut in the process. Blockchain developers want the ability to process payments without a need for this middleman.

A network of so-called computing “nodes” makes up the blockchain.

A node is a computer connected to the blockchain network using a client that validates and relays transactions. Each node gets a copy of the blockchain, which is downloaded automatically upon joining the network.

Together they create a powerful second-level network, a wholly different vision for how the internet can function.

Every node is an “administrator” of the blockchain, and joins the network voluntarily (in this sense, the network is decentralized). However, each one has an incentive for participating in the network: the chance of winning Bitcoins.

Nodes are said to be “mining” Bitcoin, but the term is something of a misnomer. In fact, each one is competing to win Bitcoins by solving computational puzzles. Bitcoin was the raison d’etre of the blockchain as it was originally conceived. It’s now recognized to be only the first of many potential applications of the technology.
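The “computational puzzle” miners race to solve can be sketched in a few lines of Python. This toy proof-of-work searches for a nonce whose SHA-256 hash starts with a fixed number of zero digits; Bitcoin's real difficulty target works on the same principle, though over double SHA-256 of a full block header rather than the string used here.

```python
import hashlib

def mine(block_data, difficulty=2):
    # Brute-force a nonce until the hash meets the difficulty target
    # (here: `difficulty` leading zero hex digits).
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block with some transactions")
# Finding the nonce takes many attempts; checking it takes one hash,
# which is what lets every other node cheaply verify the winner's claim.
print(nonce, digest)
```

Raising `difficulty` by one digit multiplies the expected work by 16, which is how a real network keeps block times steady as miners add hardware.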

There are an estimated 700 Bitcoin-like cryptocurrencies (exchangeable value tokens) already available. In addition, a range of other potential adaptations of the original blockchain concept are currently active or in development.

What are the applications of blockchain?

The nature of blockchain technology has got imaginations running wild, because the idea can now be applied to any need for a trustworthy record. It is also putting the full power of cryptography in the hands of individuals, stopping digital relationships from requiring a transaction authority for what are considered ‘pull transactions’.

For sure, there is also a lot of hype. This hype is perhaps the result of how easy it is to dream up a high-level use case for the application of blockchain technology. It has been described as ‘magic beans’ by several of the industry’s brightest minds.

There is more on how to test whether blockchain technology is appropriate for a use case or not here — “Why Use a Blockchain?” For now, let’s look at the development of blockchain technology for how it could be useful.

As a system of record

Digital identity

Cryptographic keys in the hands of individuals allow for new ownership rights and a basis to form interesting digital relationships. Because it is not based on accounts and the permissions associated with them, because it is a push transaction, and because ownership of private keys is ownership of the digital asset, this provides a new and secure way to manage identity in the digital world, one that avoids exposing users to sharing too much vulnerable personal information.

Tokenization

Tokens are used to bind the physical and digital worlds. These digital tokens are useful for supply chain management, intellectual property, anti-counterfeiting, and fraud detection.

Inter-organizational data management

Blockchain technology represents a revolution in how information is gathered and communicated. It is less about maintaining a database and more about managing a system of record.

For governments

Governments have an interest in all three components of blockchain technology. First, there are the ownership rights surrounding cryptographic key possession, revocation, generation, replacement, or loss. They also have an interest in who can act as part of a blockchain network. And they have an interest in blockchain protocols as they authorize transactions, since governments often regulate transaction authorization through compliance regimes (e.g., stock market regulators authorize the format of market exchange trades).

For this reason, regulatory compliance is seen as a business opportunity by many blockchain developers.

For financial institutions

For audit trails

Under the client-server model, banks and other large institutions that help individuals form digital relationships over the internet are forced to secure the account information they hold on users against hackers.

Blockchain technology offers a means to automatically create a record of who has accessed information or records, and to set controls on permissions required to see information.

This also has important implications for health records.

As a platform

For smart contracting

Blockchains are where digital relationships are being formed and secured. In short, this version of smart contracts seeks to use information and documents stored in blockchains to support complex legal agreements.

Other startups are working on sidechains — bespoke blockchains plugged into larger public blockchains. These ‘federated blockchains’ are able to overcome problems like the block size debate plaguing bitcoin. It is thought these groups will be able to create blockchains that authorize super-specific types of transactions.

Ethereum takes the platform idea further. A new type of smart contracting was first introduced in Vitalik Buterin’s white paper, “A Next Generation Smart Contract and Decentralized Application Platform”. This vision is about applying business logic on a blockchain, so that transactions of any complexity can be coded, then authorized (or denied) by the network running the code.

As such, ethereum’s primary purpose is to be a platform for smart contract code: programs that control blockchain assets, executed by a blockchain protocol, in this case running on the ethereum network.

For automated governance

Bitcoin itself is an example of automated governance, or a DAO (decentralized autonomous organization). It, and other projects, remain experiments in governance, and much research remains to be done on this subject.

For markets

Another way to think of cryptocurrency is as a digital bearer bond. This simply means establishing a digitally unique identity for keys that control code able to express particular ownership rights (e.g., it can be owned or can own other things). These tokens mean that ownership of code can come to represent a stock, a physical item, or any other asset.

For automating regulatory compliance

Beyond just being a trusted repository of information, blockchain technology could enable regulatory compliance in code form — in other words, how blocks are made valid could be a translation of government legal prose into digital code. This means that banks could automate regulatory reporting or transaction authorization.
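As a purely hypothetical illustration of “legal prose translated into digital code”, the following Python sketch validates a block of transactions against a made-up reporting rule. The threshold, field names, and the idea of a KYC-verified sender list are all invented for the example, not taken from any real regulation or blockchain.

```python
REPORTING_THRESHOLD = 10_000   # hypothetical regulatory limit, invented for the example

def block_is_valid(transactions, verified_senders):
    # Compliance as code: any transfer above the threshold must come
    # from a verified sender, or the whole block is rejected before
    # it can join the chain.
    return all(
        tx["amount"] <= REPORTING_THRESHOLD or tx["sender"] in verified_senders
        for tx in transactions
    )

print(block_is_valid([{"sender": "acme", "amount": 25_000}], {"acme"}))      # True
print(block_is_valid([{"sender": "shell_co", "amount": 25_000}], {"acme"}))  # False
```

If a rule like this is part of block validity, compliant reporting stops being a separate back-office process: a non-compliant transaction simply never makes it onto the chain.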

What is Ethereum?

Ethereum is the second-largest blockchain. It’s much smaller than bitcoin — its cryptocurrency token, ether, has a market cap of $42 billion, compared to bitcoin’s quarter of a trillion dollars — but Ethereum can integrate smart contracts onto its blockchain. “So if I upload a program, let’s say a bet, I escrow some money into it, you escrow some money into it, and then a third party lets us know whether the Chicago Bulls beat the New York Knicks or vice versa, resolving our bet,” explains Joe Lubin, one of Ethereum’s founders.

Ethereum isn’t meant to be just a cryptocurrency like bitcoin, according to Lubin, but a full enterprise platform onto which programmers can build applications for any number of things. Despite that, one ether went from being worth $8 in January to being worth $434 in December as investors began to sense the enormous sums of money to be made.

Like Bitcoin, Ethereum is a distributed public blockchain network. Although there are some significant technical differences between the two, the most important distinction to note is that Bitcoin and Ethereum differ substantially in purpose and capability. Bitcoin offers one particular application of blockchain technology, a peer to peer electronic cash system that enables online Bitcoin payments. While the Bitcoin blockchain is used to track ownership of digital currency (bitcoins), the Ethereum blockchain focuses on running the programming code of any decentralized application.

In the Ethereum blockchain, instead of mining for bitcoin, miners work to earn Ether, a type of crypto token that fuels the network. Beyond a tradeable cryptocurrency, Ether is also used by application developers to pay for transaction fees and services on the Ethereum network.


Bitcoin can be described as digital money. Bitcoin has been around for eight years and is used to transfer money from one person to another. It is commonly used as a store of value and has been a critical way for the public to understand the concept of a decentralized digital currency.

Ethereum is different than Bitcoin in that it allows for smart contracts which can be described as highly programmable digital money. Imagine automatically sending money from one person to another but only when a certain set of conditions are met. For example an individual wants to purchase a home from another person. Traditionally there are multiple third parties involved in the exchange including lawyers and escrow agents which makes the process unnecessarily slow and expensive. With Ethereum, a piece of code could automatically transfer the home ownership to the buyer and the funds to the seller after a deal is agreed upon without needing a third party to execute on their behalf.

Application of Ethereum

In addition to being a great investment coin, Ethereum is a platform that enables what has come to be known as “web 3.0”. “Web 2.0” (the internet as we know it) is based on centralized servers; to have access to the internet and most of the services therein, we have to rely on third-party servers. These servers charge us fees and collect our data (often against our will).

With Ethereum, there are no centralized servers. Instead, Ethereum runs on blockchain technology, a kind of distributed ledger technology that is upheld by a network of thousands of different computers, called “nodes.” In exchange for performing the duties that secure the network and verify the transactions that take place on it, the nodes receive rewards in the form of ETH tokens.

Ethereum was the first cryptocurrency that was built to function primarily as a settlement layer. This means that it was designed as a platform for other things to be developed on. In contrast, Bitcoin was designed initially to transact as a form of “digital cash” (though, over time, Bitcoin has become more of a settlement layer itself).

What are smart contracts and how does Ethereum work?

Ethereum’s smart contracts have been famously explained by Nick Szabo as being similar to a vending machine.

When you use a vending machine, you put money in, press a button, and receive your candy. It’s all automated; there is no third-party that needs to come and make sure that you inserted your coin or unlock the door to give the candy to you.

Smart contracts operate similarly, except they usually have nothing to do with candy; rather, they can be applied to tons of different scenarios across different industries. Smart contracts take the “middle man” out of a variety of legal and financial services.

For example, Hermione is buying a home to move into immediately and needs to set up a mortgage contract with the seller. In today’s world, she must go to a bank or use another third-party service to draw up the contract. The third party charges fees, and the process takes a while, costing time and money that Hermione doesn’t have.

Using a smart contract, a legal mortgage contract could be set up without involving any third party. The loan’s data would be stored on the blockchain, and anytime that the borrower missed a payment, the keys to the loan’s collateral would be automatically revoked.
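The mortgage contract described above can be sketched as a toy Python class. This is purely illustrative — all names are invented, and a real contract would be written in a language like Solidity and run on-chain — but it shows the key point: the revocation logic lives in the contract itself, with no bank or escrow agent in the loop.

```python
class MortgageContract:
    # Toy version of the on-chain mortgage described above.
    def __init__(self, borrower, monthly_payment):
        self.borrower = borrower
        self.monthly_payment = monthly_payment
        self.payments_made = 0
        self.collateral_unlocked = True   # borrower holds the keys

    def pay(self, amount):
        # A full payment keeps the collateral keys valid;
        # anything less counts as a missed payment.
        if amount >= self.monthly_payment:
            self.payments_made += 1
        else:
            self.miss_payment()

    def miss_payment(self):
        # No third party involved: the contract itself revokes
        # access to the collateral.
        self.collateral_unlocked = False

loan = MortgageContract("hermione", 1200)
loan.pay(1200)
print(loan.collateral_unlocked)   # True
loan.pay(0)
print(loan.collateral_unlocked)   # False
```

On a real blockchain the same state transitions would be executed and verified by every node, which is what removes the need to trust any single intermediary to enforce the terms.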

Using Smart Contracts to Build Dapps

Ethereum can be (and is) used as a platform for developing “dapps,” another name for “decentralized apps.” There are a variety of benefits of using dapps as opposed to traditional applications, including:

  • Dapps are decentralized–they are not run by any singular entity. This means that the third-parties that are an inherent part of the structure of traditional applications do not exist. You know, the third-parties that exist to charge fees or to collect your personal data?
  • Ethereum-based dapps know only the pseudonymous identity of each user. In other words, dapp users will never have to provide personal information to register or use dapps; in fact, users will not have to register at all. Instead, the dapp will operate using your pseudonymous identity within the Ethereum network.
  • All payments are processed within the Ethereum network. There is no need to integrate PayPal, Stripe, or any other third-party means of payment into Ethereum-based dapps. Payments will be securely sent and received on the Ethereum network.
  • The blockchain holds all the data. While some extraneous data may be held outside of the blockchain, everything that needs to be kept secure will be stored forever (immutably) within the blockchain. Additionally, any logs created within dapps are held on the blockchain, making public data easily searchable.
  • The front- and back-end code is open source. You can independently verify that the dapp you’re using is secure and that there is no malicious code.

Ethereum vs. other smart contract alternatives

Ethereum has a long road ahead if it wants to achieve its ambition of becoming the world’s “decentralized computer.” Even Vitalik Buterin, the creator of Ethereum, doubts its current ability to scale, saying, “Scalability [currently] sucks; the blockchain design fundamentally relies on bottlenecks where individual nodes must process every single transaction in the entire network.”

He’s correct. The Ethereum blockchain keeps getting bigger, and exhibits an increasingly large footprint for the hardware of miners and users alike. Additionally, its relatively outdated algorithmic programming makes inefficient use of the chain’s processing power, and returns a dismal number of transactions per second. This is a problem for businesses who rely on Ethereum smart contracts and impacts its future applicability and price. Fortunately, there are other smart contract platforms built on blockchain that are working to evolve the concept further.

1. QTUM

One of the most promising contenders for Ethereum’s title is QTUM, a hybrid cryptocurrency technology that blends the best attributes of bitcoin and Ethereum. The result is a solution that resembles Bitcoin Core but includes an Account Abstraction Layer that gives QTUM’s blockchain smart contract functionality via a more robust x86 virtual machine.

Essentially, this is a second-layer scaling solution akin to what bitcoin seeks in SegWit and the Lightning Network, combined with the ability to build and host smart contracts. This has made QTUM a popular destination for developers, who appreciate the protective clauses built into the platform that make it nigh impossible to commit the kinds of coding infractions that might one day become a multi-million-dollar problem. They also appreciate the presence of second-layer storage, despite its implications for decentralization, because stable business applications are their primary goal.

2. Ethereum Classic

The first hard fork that the cryptocurrency community witnessed was Ethereum splitting from Ethereum Classic in 2016, creating a new chain with ambitions to fix the gaps in Ethereum’s code. The controversy surrounded a hack in which one individual stole over $50 million in ETH from a smart contract that was holding the funds in escrow as part of the original DAO (Decentralized Autonomous Organization) project.

After the hacker exploited a glitch that withdrew ETH from users, the community voted to create a new chain, backwards-compatible with the old one, so that mistakes like these could be reversed and coins returned to their rightful owners. The old chain, whose code makes it impossible to backtrack even in the case of heinous breaches (of which there have been several), lives on as Ethereum Classic. It is continually upgraded, thanks to a vibrant and active community, and keeps pace with other projects despite its age.

3. NEO

NEO is what people like to refer to as “China’s Ethereum,” and for good reason. First, the two are very similar: both bill themselves as hosts of decentralized applications (dApps), ICOs, and smart contracts. They’re both open source, but while Ethereum is supported by a democratic foundation of developers, NEO enjoys strong support from China’s government. This has made it popular both domestically and abroad, thanks in part to its unique value proposition.

NEO uses a more energy-efficient consensus mechanism called dBFT (delegated Byzantine Fault Tolerance) instead of proof-of-work, making it much faster, with a claimed throughput of up to 10,000 transactions per second. Moreover, it supports more programming languages than Ethereum: developers can build dApps with Java and C#, and soon Python and Go, making this option accessible to startups with big ideas while helping to add to its long-term viability.

4. Cardano

One of the newest entries into the smart contract platform contest, Cardano is a dual-layer solution, but with a unique twist. The platform separates its settlement layer, which handles the currency as a unit of account, from a control layer that governs the use of smart contracts and recognizes identity, maintaining a degree of separation between computation and the currency it supports.

Cardano is programmed in Haskell, a language well suited to business applications and data analysis, making its future applications likely to be financial or organizational. This blend of public-sector usability and privacy protection makes Cardano a potentially groundbreaking solution, but it’s still very young. While the developer team’s deliberate, airtight scientific methodology makes progress slow, the result will likely be free of the Parity-style security mistakes that are an unfortunate reality among its more haphazardly assembled peers.

Mitali Sengupta is a former digital marketing professional, currently enrolled as a full-stack engineering student at Holberton School. She is passionate about innovation in AI and Blockchain technologies. You can contact Mitali on Twitter, LinkedIn, or GitHub.

AT&T Sharpens Edge With New Open Source Effort, Test Lab Launch

AT&T is continuing its aggressive edge computing push, today announcing that its first test zone for edge applications is up and running at its AT&T Foundry in Palo Alto, Calif., and that it is creating a new open source project focused on automated, distributed cloud infrastructure for carrier and enterprise networks. (See AT&T Turns Up Edge Computing Test Lab and Linux Foundation, AT&T Launch Akraino.)

According to a blog post by Melissa Arnoldi, senior executive vice president of technology and operations at AT&T Inc. (NYSE: T), the open source group, named Akraino, will be hosted by the Linux Foundation and will focus on “expanding the development of next generation zero-touch edge cloud infrastructure for carrier and enterprise networks,” …

Read more at LightReading

Understanding SELinux Labels for Container Runtimes

“I’ve just started to deal with some software that is containerized via Docker, and which is ordinarily only ever run on Ubuntu. Naturally this means nobody ever put any thought into how it will interact with SELinux.

“I know that containers get a pair of randomly chosen MCS [Multi-Category Security] labels by default, and that the files they create obviously end up with those same categories. However, when it’s time to rebuild or upgrade the container, the files are now inaccessible because the new container has a different pair of categories.

“Are we supposed to relabel these files with the new categories? Or do we have to pick the categories ourselves and then use Docker’s --security-opt option when we run the container? How do we do so without risk that some other container will end up with the same categories?”

Regarding the first question, when a container runtime like Docker (or one of the newer ones we have been working on: podman, CRI-O, and Buildah) creates a container, it picks a random MCS label to run the container with. An MCS label consists of two random numbers between 0 and 1,023, and the pair must be unique. Each number is prefixed with a c, for category. SELinux also requires a sensitivity level, s0.
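As a concrete illustration, the label selection described above can be sketched in a few lines of Python (a hypothetical helper for illustration, not the actual runtime code; the function name is our own):

```python
import random

def random_mcs_level(rng=random):
    """Pick two distinct category numbers in 0..1023, the way container
    runtimes choose a per-container MCS label, and format them as an
    SELinux level string: sensitivity s0 plus two c-prefixed categories."""
    c1, c2 = sorted(rng.sample(range(1024), 2))
    return f"s0:c{c1},c{c2}"

print(random_mcs_level())  # e.g. "s0:c211,c804"
```

To avoid the rebuild problem from the question, a container can instead be pinned to a fixed level, for example with `docker run --security-opt label=level:s0:c100,c200 ...`, so the rebuilt container keeps the same categories as the files it wrote.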

Read more at OpenSource.com

What is LLVM? The Power Behind Swift, Rust, Clang, and More

LLVM makes it easier not only to create new languages, but also to enhance the development of existing ones. It provides tools for automating many of the most thankless parts of language creation: writing a compiler, porting the output code to multiple platforms and architectures, and handling common language constructs like exceptions. Its liberal licensing means it can be freely reused as a software component or deployed as a service.

The roster of languages making use of LLVM has many familiar names. Apple’s Swift language uses LLVM as its compiler framework, and Rust uses LLVM as a core component of its tool chain. Also, many compilers have an LLVM edition, such as Clang, the C/C++ compiler (hence the name, “C-lang”), itself a project closely allied with LLVM. And Kotlin, nominally a JVM language, is developing a version called Kotlin Native that uses LLVM to compile to machine-native code.

Read more at InfoWorld

Linux: To Recurse or Not

Linux and recursion are on very good speaking terms. In fact, a number of Linux commands recurse without ever being asked, while others have to be coaxed with just the right option.

When is recursion most helpful and how can you use it to make your tasks easier? Let’s run through some useful examples and see.

Easy recursion with ls

First, the ls command seems like a good place to start. This command will only list the files and directories in the current or specified directory unless asked to work a little harder. It will include the contents of directories only if you add a -R option. 
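The recursive walk that the -R option triggers can be sketched in Python with os.walk (an illustrative equivalent of what a recursive listing visits, not how ls is actually implemented):

```python
import os

def list_recursive(top):
    """Yield every directory and file under top, similar in spirit to
    `ls -R`, which prints each directory's contents section by section."""
    for dirpath, dirnames, filenames in os.walk(top):
        for name in dirnames + filenames:
            yield os.path.join(dirpath, name)
```

Without the recursion (i.e., plain `ls`), only the first level of `top` would be reported.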

Read more at Network World

Find Large Files in Linux

In today’s tutorial, we are going to show you how to find large files in Linux. One of the most common tasks for a Linux system administrator is finding unneeded large files that consume disk space and removing them to free up space for applications that actually need it. Let’s dive in and find out how we can find large files in Linux.

1. Finding largest directories and files in Linux

First we are going to look at how we can find the largest directories and files in Linux combined. Execute the following command to find the top 10 largest directories and files on your Linux server:

# du -ah /* 2>/dev/null | sort -rh | head -n 10
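For the file half of that pipeline, a rough Python equivalent looks like this (an illustrative sketch, not a replacement for du; the function name is our own):

```python
import os

def largest_files(top, n=10):
    """Return the n largest regular files under top as (size_bytes, path)
    pairs, largest first, roughly mirroring `du -ah | sort -rh | head`."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish or cannot be stat'ed
    return sorted(sizes, reverse=True)[:n]
```

Unlike du, this counts apparent file sizes only and skips directory totals, but it is handy when du is unavailable or when you want the results in a program.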

Read more at RoseHosting Blog