
Programming Paradigms and the Procedural Paradox

Programming paradigms are different perspectives on solving a problem with software. Each of the paradigms is valuable, yet they seem so hard to define. People will discuss endlessly what each paradigm means, trying to include what they consider important and exclude what they don’t. To take an example, we get definitions of functional programming that satisfy the definer but not everyone else. And we get people pointing fingers, saying “that’s not real object-oriented programming”. These discussions are unsatisfying because they rehash the same tired ideas and never reach any firm conclusions.

I’d like to take a broader perspective and try to shed some light on why they are so hard to define. That might help me understand how to define them well.

Most of the time, programming paradigms are described in terms of their features or constraints. I think this is a useful perspective. Languages associated with a paradigm often share many features. For instance, functional languages typically have first-class functions.

But there is a much better way to think of the paradigms that doesn’t reduce them to lists of features. Each of the major paradigms is a holistic approach to solving problems with code. The paradigms are frameworks containing basic assumptions, ways of thinking, and methodology.

Read more at Dev.to

Android Oreo Adds Linux Kernel Requirements and New Hardening Features

The Linux kernel continues to add security protections so developers don’t have to build them on their own. As a result, one of the first steps security experts recommend for protecting against embedded Linux malware threats is to work with the latest possible kernel release and then regularly update field devices. Now that Android is getting long in the tooth — it was nine years ago this month that Sergey Brin and Larry Page rollerbladed onto the stage to announce the debut of the flagship HTC G1 phone — more and more Android devices are being attacked due to out-of-date Linux kernels. To address the problem before it adds to Android’s substantial challenge with malware generated from rogue or unprotected apps, Google has announced new requirements in Android 8.0 (“Oreo”) to build on Linux kernels no older than kernel 4.4.

These new requirements, which were revealed after last week’s launch of Android 8.0, are intended to support its Project Treble technology for speeding firmware updates. Oreo has also backported several kernel hardening features from upstream Linux kernels. In the coming years, Google may well be tapping security enhancements built into this week’s release of Linux 4.13 – the 13th version of the 4.x kernel – which updates its SMB support and adds Transport Layer Security support (see farther below).

Android Oreo kernel requirements and Project Treble

Google’s first minimum Linux kernel requirements for Android were posted last week on the Android Source page and revealed by Doug Lynch on XDA-Developers. Any new SoC that ships in 2017 or later and appears in an Android 8.0 device must run a Linux 4.4 or higher kernel, says Google. Oreo-based products with older SoCs must start with Linux 3.18 or higher, which is generous considering kernel 3.18 is listed by kernel.org as EOL.

There are no requirements for recent Linux kernels on older devices that are upgrading to Oreo. In addition, Android Open Source Project (AOSP) code for Oreo is available without any requirements for those who don’t need Android branding and access to Google Services.

On its requirements page, Google notes: “Regardless of launch date, all SoCs with device launches on Android O remain subject to kernel changes required to enable Treble.” Here, Google is referring to Project Treble, which formally debuts in Oreo. This modularization of Android is intended to add some separation between the lower-level, device-specific firmware written by chip manufacturers and the main OS framework.

Project Treble is implemented via a new Vendor Interface that is validated with a Vendor Test Suite (VTS). These tools give silicon makers a more detailed requirement spec for booting a new Android release so they can speed testing.

More importantly, device vendors can now update the main part of the Android framework without waiting around for SoC vendors to update their lower-level code. This should lead to faster Android software updates for customers. The hitch is that Project Treble involves restructuring low-level, hardware-specific drivers, which include extensive SoC support, so it’s likely to take several years before it affects OS update times on most Android devices.

Android 8.0 adds Linux kernel hardening

Recent Linux kernels have added kernel hardening features to help keep up with increasingly sophisticated malware schemes. As revealed on the Android Developers Blog, Android 8.0 backports four of these features from upstream Linux kernels ranging from Linux 4.4 to 4.8.

The kernel protections should help developers building Android hardware drivers to more easily detect kernel security bugs. Kernel vulnerabilities represented about a third of Android security bugs in 2016, and some 85 percent of those vulnerabilities are due to bugs in vendor drivers, according to Google estimates.

The key improvement is Linux 4.4’s Kernel Address Space Layout Randomization (KASLR), which randomizes the location where kernel code is loaded on each boot. KASLR has been backported to Linux 4.4 or higher kernels running in Android, while the other three features are backported to Linux 3.18.

Oreo also implements Linux 4.8’s “hardened usercopy” scheme, which protects usercopy functions that help transfer data between user space and kernel space memory. A “Privileged Access Never” (PAN) emulation borrowed from Linux 4.10’s ARM64 code helps prevent 64-bit ARM kernels from accessing user space memory directly. Finally, there’s a Linux 4.6 hardening feature called “Post-init read-only memory” that restricts a memory region to read-only mode after the kernel has been initialized to reduce the kernel’s attack surface.

Aside from these kernel protections, Oreo app security will benefit from a Google Play Protect service that is rolling out to Android 8.0 and older builds. Google Play Protect scans incoming and installed apps for malware, and sends notifications if it sees anything suspicious.

Security enhancements aside, Android 8.0 offers performance improvements including 2.5 times faster boot-up and smoother background activity management. The new release has borrowed the picture-in-picture (PIP) video mode from Android TV, and has made enhancements to autofill functionality, emojis, battery life, and Bluetooth audio.

Oreo features a major redesign of notifications, including the addition of user-customizable notification channels for easier management. Other notification changes include a snooze mode to temporarily keep notifications at bay, as well as the addition of “dots” on app launcher icons to show which notifications have not been acted upon.

Linux 4.13 gets tough with SMB

This week’s Linux 4.13 kernel release extends the trend of adding security functionality. According to a Linux.com story by Paul Brown, the biggest change concerns the SMB (Server Message Block) network access protocol. Linux 4.13 switches its default to SMB3 instead of the aging, vulnerable SMB1. Earlier this year, the continued widespread use of SMB1 helped fuel the spread of the WannaCry ransomware.

Linux 4.13 also adds support for Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL). TLS is far more secure than SSL, but it consumes more CPU resources, so handling it in the kernel should speed things up.

Other Linux 4.13 improvements include new support for HDMI Stereo 3D output, as well as support for non-blocking buffered I/O operations to improve asynchronous I/O. The EXT4 file system has been tweaked to allow an EXT4 directory to scale to 2 billion entries.

There’s also support for Intel’s upcoming Coffee Lake (8th Gen Core) CPUs, which will succeed the current Kaby Lake while retaining the same 14nm foundation. Linux 4.13 also adds some prep for the next-gen, 10nm Intel Cannon Lake architecture due in 2018. Finally, there’s new native support for several ARM hacker boards, including the BeagleBone Blue, NanoPi M1 Plus, NanoPi Neo2, LicheePi Zero dock board, Orange Pi Prime, Orange Pi Zero Plus 2, and SoPine SoM.

Connect with the embedded Linux community at Embedded Linux Conference in Prague. You can view the schedule here. Linux.com readers receive an additional $40 off with code OSSEULDC20. Register Now!

Blockchain Technology Is Changing How Business Is Done

This article was sponsored by IBM and written by Linux.com.

Blockchain technology is changing the way businesses record and monitor data transactions. It provides a decentralized, immutable ledger, or record of transactions, that effectively verifies the integrity of data in that ledger.

The Hyperledger Project is a collaborative open source development effort built on the goal of advancing blockchain technologies. Hyperledger now has more than 140 members and hosts several blockchain-related projects. These include Hyperledger Fabric, a blockchain framework implementation, and Hyperledger Composer, a toolset for integrating blockchain applications with existing business systems.

In this article, you’ll hear from Ivan Vankov of Cognition Foundry, an IBM LinuxONE business partner working with customers on implementing Hyperledger, about the principles of the distributed ledger model, how blockchain technology can help eliminate data forgery and verify supply chains, and how Hyperledger is changing the way business is done. To find out more, visit www.hyperledger.org or www.ibm.com/linuxone/solutions/blockchain-technology.

Linux.com: Briefly describe Hyperledger in your own words.

Ivan Vankov: The shortest definition is that Hyperledger is a blockchain designed not for cryptocurrency but for business needs. It takes all the good parts from the blockchain model, adds a ledger inside the blockchain, and has the ability to create smart contracts that manage the ledger. This is governed by a very strong and flexible authorization/authentication system based on state-of-the-art cryptography. However, this is not enough for the business — scalability, data isolation and separation, interfacing with existing systems, replacement of core components, and many more features are available with Hyperledger. Since by design it is a distributed system, there is no central point of failure.

Ivan Vankov, Chief Consultant Architect at Cognition Foundry

The ledger is immutable, and every member inside the network has a copy on their servers; information is synchronized automatically without the possibility of divergence. Even if one or more participants alter their copies of the data, this will not affect the process, because the blockchain will see the altered data and refuse to use it in any process. Hyperledger is able to prove which data is genuine without any doubt. Data forgery is not possible inside Hyperledger. The main consequence is that members inside a Hyperledger network can trust each other, since nothing can alter the data, and the data is updated only using smart contracts that are run by every single member of the network within their own infrastructure. If the smart contracts do not all agree on a particular operation at the same time, the operation will not be executed. Also, Hyperledger can work in a completely private and isolated network.

Linux.com: What is the difference between Hyperledger and other systems like Ethereum? Why is it necessary?

Ivan Vankov: The main difference is that Hyperledger does not require mining, gas fees, or taxes. Many people think that blockchain is a cryptocurrency; this is a wrong assumption. Hyperledger is a blockchain, not a cryptocurrency. Real businesses will not use a blockchain that requires constantly increasing amounts of computational power just to process transactions. Mining is very useful in the context of cryptocurrency; however, it makes the throughput very low. For example, Ethereum operates in the range of 1,000 transactions per minute. Hyperledger operates in the range of 500,000 transactions per minute with ease and can be tuned to process more than a million transactions per minute.

Also, Hyperledger is built for business needs; many decisions were made using real feedback from companies with different portfolios, including banks and financial institutions, logistics companies, medical and research teams, consulting companies, manufacturers, etc. The result is that Hyperledger is very flexible: companies adapt the tool to their needs and not the other way around.

Linux.com: Describe some real-world use cases.

Ivan Vankov: The idea of a distributed ledger is not new, but adding the ledger inside a blockchain, together with all the other enterprise-grade tools, makes a huge difference. One of the first projects built using Hyperledger is Everledger, a system that tracks provenance for assets. Users can track the history of an item, its previous owners, its numismatic value, etc. As a concrete example, when users want to buy a diamond, they can check whether it is a blood diamond, where and when it was mined, who polished it, etc. And this is not a central database; it is distributed among all participants in the diamond trade.

Walmart is testing Hyperledger for its food supply chain, and the first results are very positive. Let’s imagine that pork imported from China does not meet the expected quality; who is responsible? The farm in China, the transporting company, the processing or storage facility? Using Hyperledger, every single step can be tracked without any doubt. This is a simplified example. Supply chains are complex, but Hyperledger can make the whole chain more efficient.

Transactions in Hyperledger are really fast, and a couple of banks use this property in pilot programs for micropayments and microtransactions. The cost of a transaction can be a fraction of the current cost, and the user satisfaction reports are more than impressive. SWIFT, the backbone for financial transactions, officially announced that it will start testing Hyperledger. The final goal is to replace their current infrastructure and processes so they can be based on Hyperledger.

At IBM Interconnect 2017, a proof of concept (PoC) for distributed identity was shown. Companies can add user information, but only if they verify the information. For example, banks can provide a user’s bank account and residence details; mobile operators can provide a verified mobile number. This information is distributed and stored inside the Hyperledger network, and if a user decides to rent a car, they can give temporary access to a particular subset of their data to the rental company, and the rental company can be sure about the validity of the data. Moreover, there is no need to expose the exact data. For example, if a user is required to be older than 21 years, the system can answer with yes or no without exposing the real age or birth date of that user. This is achieved using IBM’s Identity Mixer protocol.

Hyperledger will also play a huge role in the future of IoT. Currently, there are many PoC systems, but a model is beginning to emerge, and this model is working. In general, Hyperledger has real value in models where data provenance is critical or where trust between parties must be enforced.

Linux.com: What will be required for integrating Hyperledger?

Ivan Vankov: The operating costs can be very low. Hyperledger is open source under Apache 2 license and can run using conventional hardware that companies already own or use. Integration needs to be considered; however, Hyperledger is designed to interface with existing systems. It makes no sense to completely replace the whole software stack. Hyperledger can be integrated using small incremental steps and only in places where it brings real value. This makes the whole integration process more manageable and predictable.

From a technical point of view, Hyperledger uses Docker containerization, so any proper DevOps tools and processes can be applied directly. From my experience, I can say that the most difficult and critical step is defining the points in the business process where Hyperledger will be placed. Not how but where! The next critical decision is which security model will be used: for example, whether every transaction will be directly linkable to the user who executes it or will be completely anonymous. There is a third option: transactions are anonymous, but a specific entity can link them back to the users, which is absolutely necessary for companies that are under regulations and where an audit can be executed at any point.

Linux.com: What gives you confidence that Hyperledger is here to stay?

Ivan Vankov: Hyperledger will not only stay, it will grow and expand. It brings real, measurable benefits for businesses and opens completely new possibilities for partnerships and interactions, not only for B2B but also for B2C and for the very new and unexplored field of C2C.

Hyperledger will change the way business is done in the same way that the Internet changed it not so long ago. And let’s not forget who is managing the project: The Linux Foundation. Hyperledger is also backed by companies like IBM, Intel, American Express, J.P. Morgan, Cisco, and many more that help define leading business systems.

Learn more about Hyperledger at Open Source Summit in Los Angeles. Check out the full schedule for Open Source Summit here. Linux.com readers save on registration with discount code LINUXRD5. Register now!

Ivan Vankov is a software developer with more than 15 years of experience in back-end and service development using Java, Go, Python, and Haskell. In his professional career he has been involved in projects building ERP systems, security systems using machine learning techniques such as deep reinforcement learning, and high-precision GIS systems used in the automotive industry. Follow Ivan on Twitter at @gatakka.

ContainerCon 2017 to Show Deepening Impact of Containers on Production Systems

“This year feels like the year containers came of age,” said Matt Butcher, Project Lead for Kubernetes Helm at Microsoft. As the program chair for the upcoming ContainerCon track at Open Source Summit in Los Angeles, Butcher sees a host of emerging trends and topics shaping the container landscape — for instance, network performance in systems like Mesos and Kubernetes, new and intriguing container security models, and the development of cloud-native (or cloud-first) applications.

Butcher believes that, as Microsoft invests more deeply into open source projects and the communities around them, the containerized application development community is taking a leap forward in maturity, making it feasible for container orchestration technology to finally be considered production ready. We sat down with Butcher to discuss some of the forthcoming highlights for this year’s ContainerCon.

Read more at The New Stack

The Role of OPNFV in Network Transformation

The Understanding OPNFV book takes an in-depth look at network functions virtualization (NFV) and provides a comprehensive overview of The Linux Foundation’s OPNFV project. In this article, we provide some excerpts from the book and discuss some organizational elements required to make your NFV transformation successful. These best practices stress how both technical and non-technical elements are required, with non-technical often being more critical.

According to the project’s website:

Open Platform for NFV (OPNFV) facilitates the development and evolution of NFV components across various open source ecosystems. Through system level integration, deployment and testing, OPNFV creates a reference NFV platform to accelerate the transformation of enterprise and service provider networks.

Read more at The Linux Foundation

3 Cool Linux Service Monitors

The Linux world abounds in monitoring apps of all kinds. We’re going to look at my three favorite service monitors: Apachetop, Monit, and Supervisor. They’re all small and fairly simple to use. apachetop is a simple real-time Apache monitor. Monit monitors and manages any service, and Supervisor is a nice tool for managing persistent scripts and commands without having to write init scripts for them.

Monit

Monit is my favorite, because it provides the perfect blend of simplicity and functionality. To quote man monit:

monit is a utility for managing and monitoring processes, files, directories and filesystems on a Unix system. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations. E.g. Monit can start a process if it does not run, restart a process if it does not respond and stop a process if it uses too much resources. You may use Monit to monitor files, directories and filesystems for changes, such as timestamps changes, checksum changes or size changes.

Monit is a good choice when you’re managing just a few machines and don’t want to hassle with the complexity of something like Nagios or Chef. It works best as a single-host monitor, but it can also monitor remote services, which is useful when local services depend on them, such as database or file servers. The coolest feature is that you can monitor any service, and you will see why in the configuration examples.

Let’s start with its simplest usage. Uncomment these lines in /etc/monit/monitrc:

 set daemon 120
 set httpd port 2812 and
     use address localhost  
     allow localhost        
     allow admin:monit      

Start Monit, and then use its command-line status checker:

$ sudo monit
$ sudo monit status
The Monit daemon 5.16 uptime: 9m 

System 'studio.alrac.net'
  status                  Running
  monitoring status       Monitored
  load average            [0.17] [0.23] [0.14]
  cpu                     0.8%us 0.2%sy 0.5%wa
  memory usage            835.7 MB [5.3%]
  swap usage              0 B [0.0%]
  data collected          Mon, 04 Sep 2017 13:04:59

If you see the message “/etc/monit/monitrc:289: Include failed — Success ‘/etc/monit/conf.d/*'” that is a bug, and you can safely ignore it.

Monit has a built-in HTTP server. Open a Web browser to http://localhost:2812. The default login is admin, monit, which is configured in /etc/monit/monitrc. You should see something like Figure 1 (below).

Click on the system name to see more statistics, including memory, CPU, and uptime.

That is fun and easy, and so is adding more services to monitor, like this example for the Apache HTTP server on Ubuntu.

check process apache with pidfile /var/run/apache2/apache2.pid
    start program = "service apache2 start" with timeout 60 seconds
    stop program  = "service apache2 stop"
    if cpu > 80% for 5 cycles then restart
    if totalmem > 200.0 MB for 5 cycles then restart
    if children > 250 then restart
    if loadavg(5min) greater than 10 for 8 cycles then stop
    depends on apache2.conf, apache2
    group server    

Use the appropriate commands for your Linux distribution. Find your PID file with this command:

echo $(. /etc/apache2/envvars && echo $APACHE_PID_FILE)

The various distros package Apache differently. For example, on CentOS 7 the package is httpd, started and stopped with systemctl start httpd and systemctl stop httpd.
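As a rough sketch, the equivalent check on CentOS 7 might look like the following; the PID file path and the thresholds are illustrative and may need adjusting for your system:

check process httpd with pidfile /var/run/httpd/httpd.pid
    start program = "/usr/bin/systemctl start httpd" with timeout 60 seconds
    stop program  = "/usr/bin/systemctl stop httpd"
    if cpu > 80% for 5 cycles then restart
    if totalmem > 200.0 MB for 5 cycles then restart
    group server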

After saving your changes, run the syntax checker, and then reload:

$ sudo monit -t
Control file syntax OK
$ sudo monit reload
Reinitializing monit daemon

This example shows how to monitor key files and alert you to changes. The Apache binary should not change, except when you upgrade.

check file apache2
    with path /usr/sbin/apache2
    if failed checksum then exec "/watch/dog"
       else if recovered then alert

This example configures email alerting by adding my mailserver:

set mailserver smtp.alrac.net

monitrc includes a default email template, which you can tweak however you like.
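The mail server alone doesn’t say who should receive the messages, so in practice you also want a set alert line with a recipient. A minimal sketch, with a placeholder address:

# Placeholder recipient address; Monit sends alerts for all monitored events to it
set alert admin@alrac.net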

man monit is well-written and thorough, and tells you everything you need to know, including command-line operation, reserved keywords, and complete syntax description.

apachetop

apachetop is a simple live monitor for Apache servers. It reads your Apache logs and displays updates in real time. I use it as a fast, easy debugging tool. You can test different URLs and see the results immediately: files requested, hits, and response times.

$ apachetop
last hit: 20:56:39         atop runtime:  0 days, 00:01:00             20:56:56
All:           12 reqs (   0.5/sec)         22.4K (  883.2B/sec)    1913.7B/req
2xx:       6 (50.0%) 3xx:       4 (33.3%) 4xx:     2 (16.7%) 5xx:     0 ( 0.0%)
R ( 30s):      12 reqs (   0.4/sec)         22.4K (  765.5B/sec)    1913.7B/req
2xx:       6 (50.0%) 3xx:       4 (33.3%) 4xx:     2 (16.7%) 5xx:     0 ( 0.0%)

 REQS REQ/S    KB KB/S URL
    5  0.19  17.2  0.7*/
    5  0.19   4.2  0.2 /icons/ubuntu-logo.png
    2  0.08   1.0  0.0 /favicon.ico

You can specify a particular logfile with the -f option, or multiple logfiles like this: apachetop -f logfile1 -f logfile2. Another useful option is -l, which lowercases all URLs; otherwise, if the same URL appears in both uppercase and lowercase, it is counted as two different URLs.
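For example, on a typical Ubuntu layout an invocation might look like the following; the log paths are only illustrative, so point -f at whatever access logs your Apache configuration actually writes:

$ apachetop -l -f /var/log/apache2/access.log -f /var/log/apache2/other_vhosts_access.log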

Supervisor

Supervisor is a slick tool for managing scripts and commands that don’t have init scripts. It saves you from having to write your own, and it’s much easier to use than systemd.

On Debian/Ubuntu, Supervisor starts automatically after installation. Verify with ps:

$ ps ax|grep supervisord
 7306 ?        Ss     0:00 /usr/bin/python 
   /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf

Let’s take our Python hello world script from last week to practice with. Set it up in /etc/supervisor/conf.d/helloworld.conf:

[program:helloworld.py]
; full path to the script to run
command=/bin/helloworld.py
; start the program when supervisord starts, and restart it if it exits
autostart=true
autorestart=true
; capture stderr and stdout to these files (the directory must exist)
stderr_logfile=/var/log/hello/err.log
stdout_logfile=/var/log/hello/hello.log

Now Supervisor needs to re-read the conf.d/ directory, and then apply the changes:

$ sudo supervisorctl reread
$ sudo supervisorctl update

Check your new logfiles to verify that it’s running:

$ sudo supervisorctl reread
helloworld.py: available
carla@studio:~$ sudo supervisorctl update
helloworld.py: added process group
carla@studio:~$ tail /var/log/hello/hello.log
Hello World!
Hello World!
Hello World!
Hello World!

See? Easy.
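From here, you manage the program with the usual supervisorctl subcommands, using the name from the [program:helloworld.py] section, for example:

$ sudo supervisorctl status helloworld.py
$ sudo supervisorctl stop helloworld.py
$ sudo supervisorctl restart helloworld.py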

Visit Supervisor for complete and excellent documentation.

Improving Security Through Data Analysis and Visualizations

We’ve all heard the saying that a picture is worth a thousand words. When done properly, visualizing data enables people to see relationships and patterns in their data that they might otherwise never see, or that would take them a very long time to uncover. Visualizing data also enables humans to process exponentially more data than would ever be possible by looking at the raw numbers alone.

Ultimately, using effective data visualizations will enable a security analytics program to derive much more value from the data. There’s a term I really love that I believe was coined by Bill Franks, the Chief Analytics Officer of Teradata: time to insight (TTI). TTI is a measure of how long it takes to go from raw data to something of value. It is especially important in security, where the value of an insight decreases over time. Effective visualization can dramatically decrease the TTI and thus improve your organization’s response time and increase the value of insights and analytic efforts.

Read more at O’Reilly

How I Learned Go Programming

Go is a relatively new programming language, and nothing makes a developer go crazier than a new programming language, haha! Like many new tech inventions, Go was created as an experiment. The goal of its creators was to come up with a language that would resolve the bad practices of other languages while keeping their good parts. Its first stable release, Go 1.0, arrived in March 2012. Since then, Go has attracted many developers from all fields and disciplines.

During the second quarter of this year, I joined ViVA XD, an in-flight entertainment platform changing the world one trip at a time. We chose Go to build the whole infrastructure. Go was, and still is, an awesome choice for us in terms of performance and its fast, simple development process.

In this article, I’ll give a short introduction to the Go language, which should serve as motivation for further reading.

Read more at Dev.to

The DevSecOps Skills Gap

Few enterprise IT trends have evolved from buzzword to must-have as solidly as DevOps. Virtually everyone agrees that a software development and delivery process that bridges the traditional gap between dev teams and operations professionals is a good thing for the enterprise, an approach that is almost certain to deliver software faster and more reliably.

And yet, the results of a just-published survey (“DevSecOps Global Skills Survey: Trends in training and education within developer and IT operations communities”) suggest that the rush to adopt DevOps practices might be leading enterprises to an insecure place.

Sponsored by application security firm Veracode and DevOps.com, a site dedicated to DevOps education and community building, the survey of IT professionals uncovered the disturbing fact that “developers today lack the formal education and skills they need to produce secure software at DevOps speed.”

Read more at ADT Mag

Changing the World with the Power of Cognitive Computing

In his keynote at Open Source Summit in Los Angeles, Tanmay Bakshi will talk about how he’s using cognitive and cloud computing to change the world through open source initiatives, including “The Cognitive Story,” which is aimed at augmenting and amplifying human capabilities. Through this project, Bakshi is working to decipher brain wave data through AI and neural networks and provide the ability to communicate to those who cannot communicate naturally.

Bakshi is a software and AI/cognitive developer, author, algorithm-ist, TEDx speaker, IBM Champion for Cloud, and Honorary IBM Cloud Advisor. He also hosts the IBM Facebook Live Series called “Watson Made Simple with Tanmay.”

At age 13, Bakshi is on a mission to reach and help at least 100,000 aspiring beginners learn how to code, by sharing his knowledge through his YouTube channel “Tanmay Teaches” and through his books, keynotes, workshops, and seminars. Here, Bakshi shares more about his work and his upcoming keynote.

Linux.com: Can you tell us about how you are involved with open source? What are some projects that you maintain or have founded?

Tanmay Bakshi: I am a huge supporter of open-source code and technology. I have founded open source projects that I actively maintain. One of these projects is AskTanmay, an open source web-based Natural Language Question Answering (NLQA) System, which was one of my very first Watson projects.

I also have a YouTube channel called Tanmay Teaches, where I love to share my knowledge about topics like computing, programming, algorithms, Watson/AI, machine learning, math, and science. When I find something I think the community needs to know about, I create a tutorial, build the entire application, and explain and open source the project on GitHub. To date, I have 144 videos and counting.

Another project I’m closely involved with, which will touch a lot of people’s lives, is “The Cognitive Story.” It’s a project that I’m a part of, and it uses artificial intelligence in a field where I believe it can make the most impact — healthcare. The point of The Cognitive Story is to augment people’s lives using the power of cognitive computing and AI. This is a completely open source project, and anyone is welcome to take help from this project and also contribute to its common cause.

Furthermore, the reason I’m so passionate about open source is that it’s one of the ways through which I can share my knowledge. Lots of people reach out to me with their problems and questions that they have about coding and technology. When a project is open source, nobody needs to “rediscover fire” or “reinvent the wheel” — they’re not spending time rebuilding a base that’s already been built. They’re working on top of the base to create even better software that can benefit the community.

That is the main reason I love Linux, and at Open Source Summit North America, I look forward to connecting with more supporters of open source.

Linux.com: How are you involved with these various projects?

Bakshi: DeepSPADE is one of my most recent AI projects, and I’m very excited about it — the basic point of DeepSPADE is to detect spam on public community websites and automatically report it to the people who can take care of it. It uses a very deep Convolutional Neural Network (CNN) + Gated Recurrent Unit (GRU) model to achieve this. You can find out more about it on a blog that I wrote.

AskTanmay was my very first Watson project, and it’s an NLQA system that can answer natural language questions. It uses a combination of IBM Watson’s NLU and NLC services with BiDAF (Bi-Directional Attention Flow) to understand online resources to answer your questions. This open source code is available on GitHub.

The first chapter in The Cognitive Story (TCS) is to help those with special needs and disabilities. Our very first goal here is to help a quadriplegic girl who lives north of Toronto, and her name is Boo. She’s unable to communicate or express herself in any way — and only her mom can understand the very broad concepts she tries to convey, which is why we’ve given her mom the title of “Intimate Interpreter.” My role in TCS is to implement deep learning systems to understand Boo’s EEG brain waves and decipher them into what she’s trying to communicate. The project is open source and is available on GitHub.

Linux.com: What’s the common theme among these projects?

Bakshi: Whether it be (a) trying to reduce the time it takes to research something, (b) allowing website users to have a better experience, or (c) allowing those who can’t communicate naturally to communicate via AI, the commonality is that I want to share my knowledge through these open source projects. We are at a point in time where conventional computing alone is not able to help us. As an Open Source Community, we need to build and provide tools in the hands of those working in healthcare, security, agriculture, science, education, etc., so that they can do their work better and the entire community can benefit. All these projects use machine learning to make people’s lives easier and better to live.

Linux.com: What is going to be the core focus of your talk at Open Source Summit?

Bakshi: In my talk, I’ll primarily urge everyone to understand the importance of open sourcing AI technology. Since AI is still an evolving technology, yet already such an integral part of our lives, there’s a need to expand this technology at an even more rapid pace through the power of open source – we’re only holding back our own progress by keeping our code to ourselves.

Linux.com: You are also hosting a Birds of a Feather session at OS Summit. Can you tell us a bit about it?

Bakshi: In my BoF talk, I will take a deep-dive into the working of the DeepSPADE system: why it’s structured as it is and the logic behind the model. I’ll also talk about the evolution of the model, and why I chose the CNN+GRU method.

Linux.com: Who should be attending your talk?

Bakshi: I’d recommend my keynote to machine learning beginners/experts, and those who are curious as to how the power of AI and ML can not only change but also augment their lives and amplify their skills. I’d recommend my BoF talk to all those who have used machine learning before, or are machine learning experts, and who are interested in how and why DeepSPADE works.

Check out the full schedule for Open Source Summit here. Linux.com readers save on registration with discount code LINUXRD5. Register now!