
Kubernetes from scratch for developers

Wednesday, 28 November 2018

In this attempt to explain Kubernetes from scratch, I bring you a guide on how to make a very fruitful study of it. These thoughts are from a software developer's perspective. It's not complicated, and it is quite well explained in the Kubernetes (k8s) documentation.

My FIRST ADVICE before beginning the study of k8s: you should study an application container engine in depth. I suggest Docker, but there are many others, like rkt, runc, or any other engine that follows the OCI specifications.
If you feel comfortable with Docker, keep going; in any other situation you will need to pick up some skills, which can be done on your own. As a developer it won't be too complicated:
  • the documentation site can guide you with several examples and practical tasks.
  • the free Docker labs are very useful too: easy, intuitive, and covering the whole Docker learning process.
ONCE we know at least the most important aspects of the container engine, let's start.

At this point, concepts like Cluster or Node should be obvious to everyone.
So, the first things to know:
  1. What is k8s, really? It is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for the deployment, maintenance and scaling of applications.
  2. The components of k8s

[Diagrams: the basic k8s infrastructure, a cluster with master and worker nodes and their components]

The previous pictures show us the basic infrastructure, BUT where are the containers? Our target is to deploy and finally run the applications that live in our containers, and our containers live inside the k8s PODs. These PODs are the basic building block in k8s, and the most important elements in this process are our applications, hosted in our containers, which in turn live in those PODs.

Because our target is to deploy and finally run applications, this is the whole picture that we have at this point:

A Cluster contains NODEs [1 master, 1..n workers]
Worker Nodes contain PODs
PODs contain Containers
Containers contain applications with their own environments.
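The hierarchy above, from POD down to container, can be sketched in a minimal Pod definition (an illustrative example; the name and image below are placeholders, not from the repository):

```yaml
# A Pod wrapping a single container; the containers list may hold several.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical Pod name
spec:
  containers:
  - name: web              # the container living inside the Pod
    image: nginx:1.15      # the application image the container runs
```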

[Diagram: worker nodes hosting PODs; each POD contains containers and a volume]

The previous picture includes an element I haven't mentioned yet (the volume); I will talk about it in upcoming posts, but for now you can think of a POD's volume as similar to a container volume in Docker.
These PODs live in NODEs, but which NODEs? Who decides which node spins up a specific POD?
If we take a look at the master node components we can see the kube-scheduler, which is responsible for creating a POD on a specific node. The kube-scheduler watches newly created PODs that have no node assigned and selects a node for them to run on. This process is not so simple: many factors are taken into account for scheduling decisions, including individual and collective resource requirements. In upcoming posts we will see that the scheduler's behaviour can be changed, and how you can sometimes choose on which node your POD spins up.

There are several ways to create PODs, BUT all of them involve a reference to images run in containers.
From a developer's point of view the POD creation process is easy to understand; it has two main components:
  • a controller
  • a resource definition
A third aspect is how we can talk to the cluster; k8s lets us do that in different ways.
It is up to you: if you are familiar with RESTful environments you can access the cluster through the k8s REST API. The API is documented with Swagger, so you get an interface, or you can issue a curl call with the proper command.

In this case I am going to use the command-line interface (kubectl), which runs commands against the Kubernetes cluster.

kubectl syntax: kubectl [command] [TYPE] [NAME] [flags]

kubectl + creation command + definition document

Every time the above statement takes place, the object defined in our definition document must be validated by the kube-apiserver before it configures data for any API objects (pods, services, replicationcontrollers, and others).

Creation commands (to create API objects):
Definition document: a resource definition [a YAML or JSON document] that describes the desired object handled by a Controller during the creation process. In our case the target is POD creation.
In k8s, a Controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state; in our case, the desired state will be a POD.

Let's see some definition document examples and their relation to the creation commands:

# nginx-deployment.yaml describes a desired state in a Deployment object; the Deployment controller changes the actual state to the desired state at a controlled rate.

kubectl create -f https://raw.githubusercontent.com/ldipotetjob/k8straining/develop/yamldefinitions/nginx-deployment.yaml
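A Deployment definition of this kind typically looks like the following sketch (field values are illustrative, not necessarily the content of the linked file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # desired state: three Pod replicas
  selector:
    matchLabels:
      app: nginx
  template:                    # Pod template the controller creates Pods from
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
```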

# nginx-repcontroller.yaml describes a ReplicationController object

kubectl create -f https://raw.githubusercontent.com/ldipotetjob/k8straining/develop/yamldefinitions/nginx-repcontroller.yaml

Some YAML definition document examples

Everything done here is from a software developer's perspective, so, in the same way as when we implement any project:
  1. Local projects/environments: create/run everything locally.
  2. Production projects/environments: upload to a cloud environment created by ourselves or by a certified provider.
To implement the local projects/environments indicated above we have several options:
  1. Online training platforms [katacoda]; they sometimes run too slowly, but they are fine just for testing.
  2. Install a local environment; this is very easy with minikube, a configurable environment that gives you a simulation of a k8s cluster.
  3. Install k8s on your bare metal/laptop.
I want to give you some interesting and important reference links for working with k8s:
  1. The whole kubectl command-line reference [you can find examples there of how to use each kubectl command with its flags and arguments]
  2. The kubectl Cheat Sheet
It is very important that, as developers, Docker and Kubernetes be familiar to us. Do not try to memorize commands or syntax; try to understand how the whole system works, the idea. Dive into the k8s documentation in an ordered way. The following topics are a very good starting point:
  1. Kubernetes architecture 
  2. Containers 
  3. Workloads(Pods and Controllers)

The aforementioned elements are physical and logical objects; to make everything work together we will need some abstract elements, for example Services. In upcoming posts I am going to review k8s Services in depth, along with their relationship to the rest of the k8s elements.

Making available an Unscanned Cassandra image from the official docker file

Wednesday, 21 November 2018

I want to make a constructive criticism here: to all companies working in open source environments, please make things easy. We do many things for free, but that does not mean the result shouldn't be almost (if not completely) perfect.

The problem to solve in this post: when I want to create a Cassandra 3.7 image from the official Dockerfile, the process fails, and when I look around the internet, many people are facing the same problem. So the Dockerfile is broken in that version and in others.

I have tested the Cassandra Docker Hub repository, and it is true that we can build our images from its source Dockerfiles, BUT only a few of them build properly (see image below).

If you want to pull a Cassandra image and spin up containers, that can be done with almost all Cassandra versions. If you take a look, there are Scanned and Unscanned images; if you try to build an image from source and the tag you want to build from is an Unscanned image, you will probably face plenty of trouble.

I am going to try with Cassandra 3.7, and the first thing I do is follow the instructions described in the Cassandra GitHub repository for its Dockerfile template. Surprisingly, we get an error; even using the Dockerfile for Cassandra version 3.7 we face the same error, see image below.


So, in the best case, when I build the image after changing the Debian suite, I still have to refactor it, because the image can be built BUT the cqlsh tool cannot be accessed, see image below.


You can dive into the GitHub issue that explains how other Unscanned Cassandra versions face the same problem when you try to create an image from the Dockerfile.

Anyway, it is frustrating after too many hours of trying; finally we refactored the image to solve the problems with openjdk-8-jre-headless, adding the following snippet of code to the official image:

RUN { \
  echo 'Package: openjdk-* ca-certificates-java'; \
  echo 'Pin: release n=*-backports'; \
  echo 'Pin-Priority: 990'; \
 } > /etc/apt/preferences.d/java-backports

The snippet above is also the solution for several other versions of the Cassandra Dockerfile images; if you want to build images from source for Cassandra versions 3.9 - 3.10, that is the code I have indicated previously.

You can find all the source code of the Cassandra image ver 3.7 in my GitHub repository. We have tested it, but if you have any problem please send us feedback.

There are many reasons why we might need to build an official image from its sources: for example, if we need to add a layer to the image for liveness and readiness probes, in case we want to orchestrate our container with Kubernetes. It is not an obligation, just a good practice. You can face the same situation in Docker when you are defining your Docker Compose YAML file. In an upcoming post I am going to explain how to deal with it.
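For context, such a probes layer is declared later on the Pod's container spec in Kubernetes. A minimal hedged sketch (the TCP check on the CQL port 9042 and the timings are illustrative; real Cassandra health checks usually need more care):

```yaml
containers:
- name: cassandra
  image: cassandra:3.7
  readinessProbe:            # is the node ready to serve CQL traffic?
    tcpSocket:
      port: 9042
    initialDelaySeconds: 60
  livenessProbe:             # restart the container if the port stops answering
    tcpSocket:
      port: 9042
    initialDelaySeconds: 90
    periodSeconds: 30
```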





Scheduling in Akka with Quartz - a wise solution

Monday, 24 September 2018

My problem arose when I needed to execute several tasks following a specific schedule (day/time), indefinitely and over the long term. These tasks follow different patterns, and the schedule definitely varies over time: the task we execute today will be different from the task I am going to execute next week, and the same goes for its schedule.

So let me describe my environment and what I want:
  • A process like the Linux cron service, for running commands on a predetermined schedule.
  • Objects able to process configuration files; these config files contain lists of commands and when (in time) they should be invoked.
  • Some of the aforementioned configuration files must be modified dynamically, that is, without stopping our application.
Why not use Linux cron + crontab and execute shell script files?
  1. Because of its configuration.
  2. Because of the scalability we need.
  3. Because of the shell-scripting complexity we would need (in real life I usually need database connections, parsing thousands of URLs, parallel processing over the network, etc.).
  4. Because it is an OS configuration task.
All of this in an environment where we need intensive use of concurrency. So I have to deal with this scenario, but the Akka libraries have no solution for it. There are several options on the market, some of them:
  1. Apache Camel Timer
  2. Quartz
  3. Akka-Quartz Scheduler
In my case I am going to use the last one, Akka-Quartz Scheduler, for several reasons: Apache Camel is, in my opinion, too complicated just to use its Timer; Quartz has the same problem and is too oriented to the Java community, so every kind of listener has to be implemented; and we need to work in an Akka environment.

I will not tell you here which is the better or worse option for my "akka cron environment". Schedulers and timers are complicated aspects of programming, so if you want a well-founded opinion you will need deep benchmarking of the three of them; that is not the intention of this document.

What are we going to do in our example?
  1. Execute several tasks following a specific schedule (day/time), indefinitely until something goes wrong.
  2. Be able to change the aforementioned schedule for any other when we need to, all without stopping any of the actors that are busy executing our tasks.
It is up to you not to change the scheduler to PAST dates; I do not make any validation of the dates you configure in your schedule.

As the Akka-Quartz Scheduler documentation explains, its goal is to provide Akka with a scheduling system that is closer to what one would expect for cron-type jobs.

This is the example I bring to readers today:

We have a platform whose target is to collect information from a web page that publishes everything related to movies. At the same time, I know that every month the site publishes information about when (in time) new pages will be added with the info related to new movies. All our code is based on the AkkaScheduler module on my GitHub; it is a multi-module project, so you can follow the instructions indicated in the repository if you want to deploy it.

Our project has one very important aspect:

Configuration files: I have two configuration files.
The first one tells me that every month I have to do something. This configuration is internal and lives in an internal file, in our case the Akka configuration file (application.conf) on my GitHub:

  quartz {
   defaultTimezone = "Europe/London"
   schedules {
    moviepages {
     description = "A cron job that fires off every month"
     expression = "0 30 2 2 * ? *"
    }
   }
  }
code 1.0

See this reference for the CronExpression format. Because the info is published on the first day of each month, I check at the beginning of the second day of every month. So this is an internal configuration (ref: code 1.0), because it never changes (every month I have to review the schedule for the pages related to new films that will be added or updated).
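Reading that expression field by field (Quartz order: seconds, minutes, hours, day-of-month, month, day-of-week, optional year):

```
"0 30 2 2 * ? *"
 │ │  │ │ │ │ └── year: every year (optional field)
 │ │  │ │ │ └──── day of week: ? (no specific value)
 │ │  │ │ └────── month: every month
 │ │  │ └──────── day of month: the 2nd
 │ │  └────────── hour: 2
 │ └───────────── minute: 30
 └─────────────── second: 0
```

That is, 2:30 am on the second day of every month.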

The second one tells me when each page holding the information will be updated during the month. This is an external configuration file that indicates when any new page will be updated or added throughout the month, so that on the day a page is updated or added our actor collects information about it. Every month this external file has to be replaced with a new schedule.
An example of my external file (cronmoviepages.conf), which you can find on my GitHub, is below:

## https://github.com/lightbend/config/blob/master/HOCON.md
## cronexpresssion = "Seconds Minutes Hours Day-of-month Month Day-of-Week Year(Optional)"

schedule {
  defaultTimezone = "Europe/London"
  moviepagemejorenvo = [
    {
      cronexpresssion = "0 4 2 20 9 ? *"
      moviepage = "40"
    }
    {
      cronexpresssion = "0 4 2 20 9 ? *"
      moviepage = "41"
    }
  ]
}
code 1.1

In code 1.1, cronexpresssion indicates when the actor has to update the associated page of the movie website. Internally, the software groups all moviepage entries with the same cronexpresssion and fires the jobs per group.
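That grouping step can be sketched like this (illustrative Scala with hypothetical data, not the project's exact code):

```scala
// Sketch: group page numbers that share the same cron expression,
// so one scheduled job can be fired per group.
object GroupPagesSketch extends App {
  val pages = List(
    ("0 4 2 20 9 ? *", "40"),
    ("0 4 2 20 9 ? *", "41"),
    ("0 0 3 25 9 ? *", "42"))

  val grouped: Map[String, List[String]] =
    pages.groupBy { case (cron, _) => cron }
         .map { case (cron, pairs) => cron -> pairs.map(_._2) }

  // "0 4 2 20 9 ? *" ends up mapped to pages 40 and 41, scheduled together
  grouped.foreach { case (cron, ps) => println(s"$cron -> ${ps.mkString(",")}") }
}
```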

We have two main entry points:
  • BootNotScheduled initializes the process. In this case we already know which pages we want to update, so we do not use the schedule.
  • BootScheduled starts the scheduled process. This is the main program that fires the scheduler.
object BootScheduled extends App {
  //https://github.com/lambdista/config
  val system = ActorSystem(ConstantsObject.ScheduledIoTDaemon)
  // This actor will control the whole System ("IoTDaemon_Scheduled")
  val daemonScheduled = system.actorOf(IoTAdmin.props, ConstantsObject.ScheduledIoTDaemonProcessing)
  // This actor will control the reScheduling about time table for update new pages with films in original version
  val reeschedule = system.actorOf(ReSchedulingJob.props, ConstantsObject.DaemonReShedulingVoMoviePages)
  //Use system's dispatcher as ExecutionContext
  //QuartzSchedulerExtension is scoped to that ActorSystem and there will only ever be one instance of it per ActorSystem
  QuartzSchedulerExtension(system).schedule("moviepages", reeschedule, FireSchedule(daemonScheduled))
}
code 1.3

In the previous code 1.3, the QuartzSchedulerExtension(system).schedule(...) call uses the internal configuration from our application.conf (ref. code 1.0). That same call is responsible for firing, on the second day of every month at 2:30 am, a job that reads the new schedule from the external file (ref. code 1.1, cronmoviepages.conf), which can be read periodically and indicates which jobs have to be re-scheduled each time with the new schedule.

.....................

      cronExpreesionMatches.foreach(
        cronExpreesionMatch => {
          val (cronExpreesion, listOfMoviePages) = cronExpreesionMatch
          log.info(
            "Programming Schedule for cronexpression: {} including matches: {}",
            cronExpreesion,listOfMoviePages.mkString("-")
          )
          val headList = listOfMoviePages.head

          // TODO it is important to kill all Scheduled jobs that has been created ONCE time that the work has done

          /** This code generate a warning because the reschedule job never exist
            * any more because headList used for named it never is the same*/
          QuartzSchedulerExtension(system).rescheduleJob(
            s"Movie-Page-Scheduler$headList",
            scheduledDaemon,
            ParseUrlFilms(listOfMoviePages),
            Option("Scheduling "), cronExpreesion, None, defaultTimezone)
        }
      )
      log.info("schedule get new MoviePages")
  }

.....................
code 1.4

In the previous code 1.4, the rescheduleJob call re-schedules an actor (IoTAdmin) to be fired following the external configuration; it then concurrently launches as many actors (ProcessMoviePage) as there are pages to update. You can see that in the code below.

........

class IoTAdmin extends Actor with ActorLogging with MessageToTargetActors{
  override def preStart(): Unit = log.info("Start process in Akka Scheduler Example")
  override def postStop(): Unit = log.info("Stop process in Akka Schedule Example")
  // TODO checking perfromance: http://docs.scala-lang.org/overviews/collections/performance-characteristics.html

  var watched = Map.empty[ActorRef, Int]

  override def receive = {

    case ParseUrlFilms(urlPageNunmList, optActorRef) =>

      urlPageNunmList.foreach(
              urlpagenumber => {
                val currentActorRef: ActorRef = context.actorOf(ProcessMoviePage.props)
                watched += currentActorRef -> urlpagenumber
                // watcher for every actor that is created 'cause the actor need to know when the process have finished
                context.watch(currentActorRef)
                // TODO urlspatterns<=varconf
                val processUrlFilmMessage = ProccessPage(
                  s"http://mejorenvo.com/p${urlpagenumber}.html",
                  optActorRef.fold(self)(ref=>ref)
                )
                sendParserUrlMessage(currentActorRef, processUrlFilmMessage)
              })
.............
code 1.5

On each day configured in the cronmoviepages.conf file, the IoTAdmin actor receives a ParseUrlFilms message to process the new page that should be updated.

This post and the project can serve as a base or skeleton if you want to create a process with the following characteristics:
  • You need to execute scheduled tasks that occur at a specific moment in time and do not need to be executed periodically.
  • You need to execute several tasks following a customizable schedule (day/time), indefinitely until something goes wrong.

This is the idea of our cron, but taken further, because sometimes we need to do more complex things, external to our OS.
I have used Akka-Quartz Scheduler with Docker containers and it works pretty well. I have NOT tested these libraries in an Akka clustering environment, with all the complexity that such implementations entail. You can read the Akka documentation on configuring seed nodes on any PaaS and running it; you can do it manually, but when you are working on a PaaS you need it to happen automatically. The explanation in the Akka library documentation, and specifically the reference to Cluster Bootstrap, was not working at the time of writing this article. I will try to explain and implement it in an upcoming post.

Relationship between docker-cqlsh-cassandra

Tuesday, 17 July 2018


We have the simplest possible architecture: two Docker images/containers. One of them holds a Cassandra db and the other one contains an application (in my case Scala-Akka apps). It could be any app in any language, with ONE important exception: our apps use the CQLsh tool in their processes, and specifically CQLsh COPY FROM/TO.

Where do the problems arise in the above structure? The application hosted in Container II has some scripts and functions that use the CQLsh tool to IMPORT FROM a CSV file into the Cassandra db (Container I).

The main aspects of our lab:

Cassandra versions tested by us (from the official repository) (Container I):
  • 3.11.1
  • 3.7
  • 3.1.1
  • 3.2
CQL version: 3.4.+ (Container I) (Container II)

When the CQL version is not compatible you can add the following parameter (it is not recommended):
  • cqlversion: override the CQL version and use the CQL version of the database you want to connect to
containerId: the id of the container where the Cassandra database is hosted

We need to know the address Cassandra is listening on (listen_address), ref. the Cassandra configuration file:
  • container_ip_address_where_is_cassandra = sudo docker inspect containerId -f '{{.NetworkSettings.IPAddress}}'
You ought to define container_ip_address_where_is_cassandra, the Cassandra listen_address property, as an OS environment variable.

cqlsh --cqlversion=cql_version_compatible listen_address

You can use the default port (9042), or any other if you change the configuration in cassandra.yaml.
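Putting those pieces together, a hedged sketch of the connection flow (the container name cassandra_image_37 and the CQL version are illustrative; this assumes both containers sit on the default bridge network):

```shell
# Discover the IP the Cassandra container is listening on...
CASSANDRA_IP=$(sudo docker inspect cassandra_image_37 -f '{{.NetworkSettings.IPAddress}}')
# ...and connect, pinning a compatible CQL version on the default port
cqlsh --cqlversion=3.4.2 "$CASSANDRA_IP" 9042
```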

And the proper Dockerfile fragment for the CQLsh installation in our Container II ought to be:

..........
# Installing python
RUN apt-get update; \
  apt-get install -y python curl
# Installing cqlsh
RUN bash -c "python <(curl https://bootstrap.pypa.io/get-pip.py)"; \
  pip install cqlsh
...........
code. 1

The above (code. 1) is a snapshot of the part of the Dockerfile that performs the cqlsh installation.

What can be done with this installation:
- create keyspaces
- create tables
- select

But the above installation does NOT cover TWO important commands:

- COPY TO
- COPY FROM

When the database is on a different server you will usually get this error:

<stdin>:2:'module' object has no attribute 'parse_options' failing with the parse_options

I have analyzed both source codes (the CQLsh shipped with Cassandra and the CQLsh from the Python pip installation [pip in code. 1]); they are different, as is the way each of them handles parse_options. You can take a look at the cqlsh pip installation source code, and at the Cassandra cqlsh tool inside your Cassandra container, in /usr/bin/cqlsh.py.

That is why, if we need to do a COPY FROM/TO with the CQLsh tool, the above solution (code. 1) does NOT work.

There are several solutions to the above problem; one of them is to install the CQLsh that comes with the official Cassandra image. This is necessary only if you need CQLsh COPY FROM/TO; in any other case the cqlsh pip installation (code. 1) in image/Container II is good enough.
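For reference, the kind of statements we need the tool to support look like this (keyspace, table and paths are hypothetical):

```sql
-- run inside cqlsh: import rows from a CSV file into a table
COPY demo_keyspace.movies (id, title, year) FROM '/data/movies.csv' WITH HEADER = TRUE;
-- and export them back out
COPY demo_keyspace.movies TO '/data/movies_export.csv' WITH HEADER = TRUE;
```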

So my CQLsh tool installation comes from the Cassandra installation (ref. the Cassandra versions tested):

Where the libraries can be found in the Cassandra installation, ver 3.7 - 3.11:

# download the cassandra image and run a container from it
docker run --name cassandra_image_37 -d cassandra:3.7

# copy the python libraries from the cassandra container
sudo docker cp cassandra_image_37:/usr/lib/python2.7 .

# cassandra libraries from the cassandra container
sudo docker cp cassandra_image_37:/usr/share/cassandra .

# cqlsh python sources and the python shell script
sudo docker cp cassandra_image_37:/usr/bin/cqlsh .
sudo docker cp cassandra_image_37:/usr/bin/cqlsh.py .

# python binary from the cassandra container
sudo docker cp cassandra_image_37:/usr/bin/python2.7 .

# openssl libraries from the cassandra container
sudo docker cp cassandra_image_37:/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 .

# crypto libraries from the cassandra container
sudo docker cp cassandra_image_37:/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 .

The project for creating the image with a Java base + Scala and the CQLsh tools needed to execute COPY FROM or COPY TO can be found in my GitHub project.

This architecture is valid only for images living on the same Docker host, where we run the containers side by side and inspection is very easy. You can include Docker Machine and configure the docker-compose.yml properly if you want to work with more replicas. A different matter altogether is orchestration (Swarm, Kubernetes, etc.) and the communication between different machines/pods.




Testing RESTful PlayFramework applications

Wednesday, 30 May 2018

As Robert C. Martin argued in his Clean Code bible, there are several aspects we have to pay attention to when we design our tests:
       - Minimize the number of asserts per concept.
       - Test just one concept per test function.
       - Tests should not depend on each other.
       - Tests should run in any environment.
A test should pass or fail, BUT you should not have to run extra processing or anything else to check whether the test was OK or failed.
Write the TEST first, OR together with the code, NOT after it is already running in a production environment.

I am going to talk about test implementation in Scala, and more specifically in Scala Play Framework applications. If you come from the Java programming language, perhaps the easiest way in is ScalaTest - FlatSpec; it is a good starting point.

So when we attempt UNIT TESTS in Play Framework, our minimal tests should cover the following artifacts:
  • Controllers
  • Actions
  • Routes
  • Our services or functionalities, obviously
To make the previous topics easier, I have some IMPORTANT tips:
  1. Encapsulate your operations in services.
  2. Inject the classes that manage the above services.
  3. Create the Class - Trait relationship. It will let you create bindings in your tests with Guice more easily and, at the same time, obtain the already well-known benefits of this programming practice.
The main ideas are:
  • Mock the services you don't want to test. Your tests must be focused as much as possible on the functionality you want to exercise.
  • Test the controller.
  • Test the actions: this means testing our action builders, the definitions that process every path in each request. A good idea is to have the action builders handle everything related to the request, for example the content type; that lets us filter for an expected Content-Type and stop processing if the request is not what we expect.
  • Test the routes: it is important to know that all the routes we defined work properly, or at least can be reached through the proper path.
  • Test any other function or service related to the HTTP request. For example, methods/functions related to content negotiation normally need to use HTTP headers, and in that case we will need to set the header on a fake request.
  • Test service functionality. I won't include anything related to the HTTP request here; just our own functionality.
In an "ideal Hello World Play Framework application" EVERYTHING is very simple, BUT the problems arise in real-world applications, with scenarios such as:
  • Multi-module applications.
  • Several route files.
  • Several bound classes injected into singleton classes.
  • Classes referencing configuration files.
  • Bindings with classes that inject other singleton classes which, in turn, reference configuration files in your project.
I will talk here about testing problems that are not clearly covered in the Play documentation and how to tackle them. It is when you work on real-life Play Framework applications that you understand how tedious mocking a binding in Guice is when you are trying to create fake apps for testing.
MockitoSugar is easier in this context; it would be better if, when mocking any component in Guice, we could have the MockitoSugar approach instead of having to create a well-mocked object and bind it when building an app via the Guice builder.
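To illustrate the contrast, this is roughly what the Guice route looks like (a sketch, assuming the TDataServices trait and UnitSpec base class used later in this post; not the project's exact code):

```scala
import com.ldg.play.baseclass.UnitSpec
import org.scalatest.mock.MockitoSugar
import play.api.inject.bind
import play.api.inject.guice.GuiceApplicationBuilder
import services.TDataServices

class FakeAppSketchSpec extends UnitSpec with MockitoSugar {
  // Mock the service once with MockitoSugar...
  val mockDataServices = mock[TDataServices]
  // ...then override the Guice binding so the fake app's injector
  // hands the mock to anything that asks for TDataServices.
  val app = new GuiceApplicationBuilder()
    .overrides(bind[TDataServices].toInstance(mockDataServices))
    .build()
}
```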

A design made for easy/smart testing:

I will not explain here why dependency injection is so important; you can find that information everywhere, including the Play Framework documentation.

In the following example I show a controller that manages the actions:

class FootballLeagueController @Inject()(action: DefaultControllerComponents, services: TDataServices) extends BaseController with ContentNegotiation {

  /**
    * Best way, using a wrapper for an action that process a POST with a JSON 
    * in the request.
    *
    * @see utilities.DefaultControllerComponents.jsonActionBuilder a reference 
    *      to JsonActionBuilder a builder of actions that process POST request
    * @return specific Result when every things is Ok so we send the status 
    *         and the comment with the specific json that show the result
    */

    def insertMatchWithCustomized = action.jsonActionBuilder{ implicit request =>
    val matchGame: Match = request.body.asJson.get.as[Match]
      processContentNegotiationForJson[Match](matchGame)
  }

// ..............

  def insertMatchGeneric = JsonAction[Match](matchReads){ implicit request =>
    val matchGame: Match = request.body.asJson.get.as[Match]
    processContentNegotiationForJson[Match](matchGame)
  }

// ..........

  def getMatchGame = action.defaultActionBuilder { implicit request =>
    val dataResults:Seq[Match] = services.modelOfMatchFootball("football.txt")
    proccessContentNegotiation[Match](dataResults)
  }

..............

}
code_example 1 ref. source code Controller

Some of my recommendations for real-life controllers:
  • I mostly need a custom action, so I inject a component [ref. code_example 1, the constructor parameter action: DefaultControllerComponents in the previous snapshot of code]. This CUSTOM action will simplify the processing of your requests in Play and, at the same time, this design will make your tests easy, something very important in our TDD process, as we will see later.
  • Encapsulate the operations that must be done in your actions in services and INJECT them into your controller [ref. code_example 1, the constructor parameter services: TDataServices]. Some tips here: inject a trait, and bind that trait to a concrete class; both things are truly important. If any service includes, for example, a customizable object like a database connection, do NOT inject it at this level; do it in the concrete class.
Given our controller [ref. code_example 1], we are going to test the controller first:

/**
  * Created by ldipotet on 23/09/17
  */
package com.ldg.play.test


import com.ldg.basecontrollers.{DefaultActionBuilder, DefaultControllerComponents, JsonActionBuilder}
.......
import com.ldg.model.Match
import com.ldg.play.baseclass.UnitSpec
import controllers.FootballLeagueController
import org.mockito.Mockito._
import org.scalatest.mock.MockitoSugar
import play.api.test.FakeRequest
import play.api.test.Helpers._
.......
import services.TDataServices


class FootballControllerSpec extends UnitSpec with MockitoSugar {

  val mockDataServices = mock[TDataServices]
  val mockDefaultControllerComponents = mock[DefaultControllerComponents]
  val mockActionBuilder = new JsonActionBuilder()
  val mockDefaultActionBuilder = new DefaultActionBuilder()
........
  val TestDefaultControllerComponents: DefaultControllerComponents = DefaultControllerComponents(mockDefaultActionBuilder,mockActionBuilder,/*messagesApi,*/langs)
  val TestFootballleagueController = new FootballLeagueController(TestDefaultControllerComponents,mockDataServices)

  /**
    *
    * Testing right response for acceptance of application/header
    * Request: plain text
    * Response: a Json
    *
    */

  "Request /GET/ with Content-Type:text/plain and application/json" should
    "return a json file Response with a 200 Code" in {

      when(mockDataServices.modelOfMatchFootball("football.txt")) thenReturn Seq(matchGame)
      val request = FakeRequest(GET, "/football/matchs")
        .withHeaders(("Accept","application/json"),("Content-Type","text/plain"))
      val result = TestFootballleagueController.getMatchGame(request)
      val resultMatchGame: Match = (contentAsJson(result) \ "message" \ 0).as[Match]
      status(result) shouldBe OK
      resultMatchGame.homeTeam.name should equal(matchGame.homeTeam.name)
      resultMatchGame.homeTeam.goals should equal(matchGame.homeTeam.goals)

    }

  /**
    *
    * Testing right response for acceptance of application/header
    * Testing template works fine: result of a mocked template shouldBe equal to Res
    * Request: plain text
    * Response: a CSV file
    *
    */

  "Request /GET/ with Content-Type:text/plain and txt/csv" should "return a csv file Response with a 200 Code" in {
    when(mockDataServices.modelOfMatchFootball("football.txt")) thenReturn Seq(matchGame)
    val request = FakeRequest(GET, "/football/matchs").withHeaders(("Accept","text/csv"),("Content-Type","text/plain"))
    val result = TestFootballleagueController.getMatchGame(request)
    val content = views.csv.football(Seq(matchGame))
    val templateContent: String = contentAsString(content)
    val resultContent: String = contentAsString(result)
    status(result) shouldBe OK
    templateContent should equal(resultContent)
  }

  "Request /GET with Content-Type:text/plain and WRONG Accept: image/jpeg" should
    "return a json file Response with a 406 Code" in {
    val result = TestFootballleagueController.getMatchGame
      .apply(FakeRequest(GET, "/").withHeaders(("Accept","image/jpeg"),("Content-Type","text/plain")))
    status(result) shouldBe NOT_ACCEPTABLE
  }
}
code_example 2 ref. source code ControllerSpec 

So, if I am planning to test my controller, and at the same time it is injected with several components, the first thing I need to do is give values to those components. And because the components are NOT the target here, I will mock them! [ref. code_example 2, lines 22-25]:

  val mockDataServices = mock[TDataServices]
  val mockDefaultControllerComponents = mock[DefaultControllerComponents]
  val mockActionBuilder = new JsonActionBuilder()
  val mockDefaultActionBuilder = new DefaultActionBuilder()

TDataServices is a trait, so we need to bind it to a concrete class; that is a very good practice. As the Play documentation indicates, by default Play will load any class called Module that is defined in the root package (the "app" directory), or you can define modules anywhere and tell Play where to find them in the configuration file (application.conf) under the play.modules configuration key.

..............
class Module extends AbstractModule {
   ...................
   bind(classOf[TDataServices]).to(classOf[DataServices])
      .asEagerSingleton()
  }
}
code_example 3 ref. source code bindings

You can find more information in the Play documentation about custom and eager bindings. You should pay attention to this: it is important for several practices, and in our case we will see its benefits in testing.

I recommend testing with Guice only when we have no other choice. That is why, if I need to inject a database connection, for example, I wouldn't do it in the controller; it is better to do it in the service class instead.
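The payoff of this design can be shown without Play at all. The following is a minimal, self-contained sketch (the String payloads stand in for the real Match model, and LeagueController is a hypothetical stand-in for FootballLeagueController): because the controller depends only on the trait, a test double can replace the concrete service without touching any file or database.

```scala
// Hedged sketch: why binding a trait to a concrete class keeps controllers testable.
// TDataServices mirrors the trait injected into the real controller.
trait TDataServices { def modelOfMatchFootball(file: String): Seq[String] }

// The concrete class owns the costly resource (file/DB access lives HERE) ...
class DataServices extends TDataServices {
  def modelOfMatchFootball(file: String): Seq[String] =
    Seq(s"parsed from $file") // real parsing elided
}

// ... so a test double can stand in for it without touching any resource.
class MockDataServices extends TDataServices {
  def modelOfMatchFootball(file: String): Seq[String] = Seq("fixed fixture")
}

// The controller depends only on the trait, never on DataServices directly.
class LeagueController(services: TDataServices) {
  def getMatchGame: Seq[String] = services.modelOfMatchFootball("football.txt")
}

val controller = new LeagueController(new MockDataServices)
println(controller.getMatchGame) // the fixture, no I/O performed
```

Guice's Module binding shown in code_example 3 automates exactly this substitution: production gets DataServices, tests get a mock.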

Our first test of the controller begins at [ref. code_example 2, lines 38-48]:

I need to simulate a behaviour, so whenever anything calls services.modelOfMatchFootball [ref. code_example 1, line 27] I will return Seq(matchGame):
when(mockDataServices.modelOfMatchFootball("football.txt")) thenReturn Seq(matchGame)
in my FootballControllerSpec.
In the same way I will build the controller with all the mocked components injected, so that I can test the controller's flow. Remember that we are at [ref. code_example 2, lines 38-48], so I have a FakeRequest; importantly, we can add to this request headers, a body (JSON, text, XML), cookies, and anything else a real request has. The test will execute the controller, but with mocked components, and it will process everything and return a response in the same way:

.........................      
when(mockDataServices.modelOfMatchFootball("football.txt")) thenReturn Seq(matchGame)
      val request = FakeRequest(GET, "/football/matchs")
        .withHeaders(("Accept","application/json"),("Content-Type","text/plain"))
      val result = TestFootballleagueController.getMatchGame(request)
      val resultMatchGame: Match = (contentAsJson(result) \ "message" \ 0).as[Match]
      status(result) shouldBe OK
      resultMatchGame.homeTeam.name should equal(matchGame.homeTeam.name)
      resultMatchGame.homeTeam.goals should equal(matchGame.homeTeam.goals)
..........................

If the TestFootballleagueController.getMatchGame method took a parameter, the statement would be TestFootballleagueController.getMatchGame(anyparam)(request) [ref. line 5 of the previous code]. In our example I have to deal with content negotiation, so I have easily added a header to my FakeRequest, and then we expect the right processing from my content-negotiation method and the right response.
It is important to know that FakeRequest(GET, "/football/matchs") is the same as FakeRequest(GET, "/"); it has nothing to do with the routes defined in the conf/routes file.
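The shape of the content-negotiation decision being tested can be sketched in a few lines of plain Scala. This is an illustration only (negotiate is a hypothetical name, not the real helper): the controller inspects the Accept header and either renders a matching representation or answers 406, which mirrors the NOT_ACCEPTABLE case in code_example 2.

```scala
// Hedged sketch of content negotiation: pick a representation from the
// Accept header, or refuse with Play's 406 (NOT_ACCEPTABLE) status.
def negotiate(accept: String): Either[Int, String] = accept match {
  case "application/json" => Right("""{"message":[]}""")   // JSON rendering
  case "text/csv"         => Right("homeTeam,goals")        // CSV template
  case _                  => Left(406)                      // no renderer fits
}

println(negotiate("application/json")) // a Right with the JSON body
println(negotiate("image/jpeg"))       // Left(406)
```

Each branch of the match corresponds to one of the three controller tests: JSON, CSV, and the deliberately wrong Accept header.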

Given our controller [ref. code_example 1], we are now going to test the actions:

package com.ldg.play.test

import akka.stream.Materializer
import com.ldg.basecontrollers.{BaseController, DefaultActionBuilder, JsonActionBuilder}
import com.ldg.implicitconversions.ImplicitConversions.matchReads
import com.ldg.model.Match
import com.ldg.play.baseclass.UnitSpec
import play.api.test.FakeRequest
import play.api.libs.json._
import play.api.test.Helpers._
import play.api.http.Status.OK
import play.api.inject.guice.GuiceApplicationBuilder
import play.api.mvc.Results.Status
import scala.io.Source

class FootballActionsSpec extends UnitSpec {

  val jsonActionBuilder = new JsonActionBuilder()
  val defaultActionBuilder = new DefaultActionBuilder()
  val jsonGenericAction = new BaseController().JsonAction[Match](matchReads)

  val rightMatchJson = Source.fromURL(getClass.getResource("/rightmatch.json")).getLines.mkString
  val wrongMatchJson = Source.fromURL(getClass.getResource("/wrongmatch.json")).getLines.mkString

  implicit lazy val app: play.api.Application = new GuiceApplicationBuilder().configure().build()
  implicit lazy val materializer: Materializer = app.materializer

  /**
    * Test JsonActionBuilder:
    *
    * validate: content-type
    *           jsonBody must be specific Model
    *
    * @see com.ldg.basecontrollers.JsonActionBuilder
    *
    * Request: application/json
    *
    */

  "JsonActionBuilder with Content-Type:application/json and a right Json body" should "return a 200 Code" in {
    val request = FakeRequest(POST, "/")
      .withJsonBody(Json.parse(rightMatchJson))
      .withHeaders(("Content-Type", "application/json"))

    def action = jsonActionBuilder{ implicit request =>
      new Status(OK)
    }
    val result = call(action, request)
    status(result) shouldBe OK
  }

.......

  "JsonAction with Content-Type:application/json and a wrong Json body" should "return a 400 Code" in {
    val request = FakeRequest(POST, "/")
      .withJsonBody(Json.parse(wrongMatchJson))
      .withHeaders(("Content-Type", "application/json"))

    def action = jsonGenericAction{ implicit request =>
      new Status(OK)
    }
    val result = call(action, request)
    status(result) shouldBe BAD_REQUEST
  }

.......

  "DefaultActionBuilder with Content-Type:text/plain and a right Json body" should "return a 200 Code" in {
    val request = FakeRequest(GET, "/").withHeaders(("Accept","application/json"),("Content-Type", "text/plain"))

    def action = defaultActionBuilder{ implicit request =>
      new Status(OK)
    }
    val result = call(action, request)
    status(result) shouldBe OK
  }

.......

}
code_example 4 ref. source code ActionsSpec

We are going to highlight this import [ref. code_example 4, line 10]:
  • import play.api.test.Helpers._
This import lets us call an action with a specific request (GET/POST, ...). An example of such a call is at [ref. code_example 4, line 48]:
  • call(action, request)
But this call needs an implicit value, an instance of a Materializer, so if we cannot get an instance of our application, the best way to build one is through Guice; see [ref. code_example 4, lines 25-26].
I am testing the actions, so I will test only the Action/ActionBuilder and some aspects related to the request; I won't test here the service that is executed under a specific action. Take a look at [ref. code_example 4, line 42] for the JSON body in our FakeRequest.
It is not the objective of this post, but you could take a look at JsonActionBuilder and DefaultActionBuilder [ref. code_example 4, lines 19-20].

Given our controller [ref. code_example 1], we are now going to test the routes:

package com.ldg.play.test


import apirest.Routes
import com.ldg.play.baseclass.UnitSpec
import services.{MockTDataServices, TDataServices}

import play.api.inject.bind
import play.api.test.FakeRequest
import play.api.test.Helpers._
import play.api.inject.guice.GuiceApplicationBuilder

class FootballRoutesSpec extends UnitSpec {

  val app = new GuiceApplicationBuilder()
    .overrides(bind[TDataServices].to[MockTDataServices])
    .configure("play.http.router" -> classOf[Routes].getName)
    .build()

 // implicit lazy val materializer: Materializer = app.materializer

   "Request /GET/football/PRML/matchs with Content-Type:text/plain and application/json" should
    "return a json file Response with a 200 Code" in {
    val Some(result) = route(app, FakeRequest(GET, "/football/matchs")
      .withHeaders(("Accept", "application/json"),("Content-Type", "text/plain")))
    status(result) shouldBe OK
  }
}
code_example 5 ref. source code RoutesSpec

This aspect is not clear in the Play Framework documentation. First of all, I need to create a fake application with the same skeleton as the real one, so I need to do the following:
  1. Create a fake app
  2. Mock every service that I am going to use, as in my real app
  3. Point it at my routes file
The previous three points are solved in [ref. code_example 5, lines 15-18]:
  • [ref. code_example 5, line 15]: We create a mock using a binding; this is one of the important things explained at the beginning. Mocks implemented with Guice are completely different from Mockito's: in Guice you need to mock the whole object. So what we are doing here is overriding our bindings with bindings to the mocked object. You can take a look at my mocked object MockTDataServices, but in general your mocked object has to give fixed values for every definition in your concrete class.
  • [ref. code_example 5, line 16]: Here I tell my fake app which routes file I want to use, by overriding the router: play.http.router -> the router class, which points to the routes file of our app (the routes we want to test).
I have to import the auto-generated/compiled Scala class Routes. Play Framework constructs a Routes class for each configured routes file. In our generated Routes class there is an important method:

def routes: PartialFunction[RequestHeader, Handler]

The aforementioned method can give us every route in our project. So if I want to test the routes of my project, a good idea is to make a request for each specific path and test that it works properly.
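The idea behind that method can be sketched without Play. In this hedged, simplified illustration, (method, path) pairs and handler names stand in for the real RequestHeader and Handler types; the point is that a compiled Routes class is, in essence, a PartialFunction from a request to a handler:

```scala
// Hedged sketch of what the compiled Routes class amounts to:
// a PartialFunction from a (method, path) request to a handler.
val routes: PartialFunction[(String, String), String] = {
  case ("GET", "/football/matchs")  => "FootballLeagueController.getMatchGame"
  case ("POST", "/football/matchs") => "FootballLeagueController.insertMatchGeneric"
}

// isDefinedAt tells us whether a route exists before dispatching to it --
// exactly what we exploit when testing every path in the routes file.
println(routes.isDefinedAt(("GET", "/football/matchs"))) // true
println(routes.isDefinedAt(("GET", "/no/such/path")))    // false
println(routes(("GET", "/football/matchs")))             // the handler name
```

In a routes test, the fake app's router plays this role, and each FakeRequest exercises one case of the partial function.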

Anyway, this post is just the tip of the iceberg. My advice is to take a look at the test module of the Play Framework source code, in this case ScalaFunctionalTestSpec, because the official documentation is not very good on this point.

So, in examples like the next code snippet from the framework documentation:


"respond to the index Action" in new App(applicationWithRouter) {
  val Some(result) = route(app, FakeRequest(GET_REQUEST, "/Bob"))

  status(result) mustEqual OK
  contentType(result) mustEqual Some("text/html")
  charset(result) mustEqual Some("utf-8")
  contentAsString(result) must include("Hello Bob")
}

they never explain how to get applicationWithRouter in line 1 of the above code, for instance. The solution:

  val applicationWithRouter = GuiceApplicationBuilder().appRoutes { app =>
      val Action = app.injector.instanceOf[DefaultActionBuilder]
      ({
        case ("GET", "/Bob") => Action {
          Ok("Hello Bob") as "text/html; charset=utf-8"
        }
      })
    }.build()

Testing is easy and should be a style of programming; perhaps at Lightbend they make it just a bit more difficult. Anyway, in my opinion, Play Framework is terrific, but its documentation is not.