AWS EKS: configuring it manually

Friday, June 28, 2019


This is a technical article focused on creating, manually, a proper configuration to send commands from your local environment to an AWS EKS cluster.

In AWS EKS you will need your local kubectl CLI to be able to talk to your AWS EKS cluster. Everything in the documentation is oriented toward installing the AWS CLI. So what happens if you don't want to install the AWS CLI? In that case the configuration needed to talk to the EKS cluster must be done manually. Let's check what that means.
Before that, you will need to know some technical concepts about k8s and AWS. I assume you feel comfortable with, or at least have basic knowledge of, the following:
  1. AWS IAM: remember that you shouldn't use your AWS account root user.
  2. Kubectl CLI: command line interface for running commands against Kubernetes clusters that must be installed on your local environment.
Our entire configuration is composed of two files in our local environment:
  1. ~/.kube/config: a config file, usually referenced through the KUBECONFIG environment variable, used to configure access to clusters. It is a definitions document like any other in k8s, with fields like apiVersion, kind, etc., so the first thing you need to know is the location of the cluster and the credentials to access it.
  2. ~/.aws/credentials: the credentials with which I am going to create my cluster, authenticating with my IAM account.
At the same time we need something (a software/tool) that, using our credentials, makes authentication against our k8s EKS cluster possible; this role is performed by aws-iam-authenticator.
You can find a proper installation guide for aws-iam-authenticator, or try the following steps.
The lines below are for OS X (note the darwin binary in the URL); if you are working on Linux, see the variant after this block, which downloads the linux/amd64 binary and appends the PATH export to ~/.bashrc instead of ~/.bash_profile.

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/darwin/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile
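
If you are on Linux, the equivalent steps would be the following (a hedged variant: it assumes the same S3 layout provides a linux/amd64 binary, as the EKS docs for this version list):

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc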

Test your aws-iam-authenticator installation: aws-iam-authenticator help
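
You can also generate a token by hand to confirm that the authenticator can read your AWS credentials; a quick smoke test (my_cluster_name and my_profile_name are the same placeholders used in the kubeconfig below):

AWS_PROFILE=my_profile_name aws-iam-authenticator token -i my_cluster_name

If everything is wired up, this prints an ExecCredential JSON containing a token.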

Now we are going to indicate the location/name of our kubeconfig file and its content.


~/.kube/config

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my_cluster_name
        # - -r
        # - arn:aws:iam::835499627031:role/eksServiceRole
      env:
        - name: AWS_PROFILE
          value: my_profile_name
yaml definition 1.0

It is interesting to have a look at several elements of yaml definition 1.0.

The command: aws-iam-authenticator entry, together with its args (aws-iam-authenticator [command] [Flags:]), runs the cluster authentication using AWS IAM and gets tokens for Kubernetes. To make that work we have our credentials file with our access keys/secrets, which we explain below.

We indicate below the location/name of our credentials file and its content.

~/.aws/credentials

[my_user_in_IAM]
aws_access_key_id=generated_aws_access_key
aws_secret_access_key=generated_aws_secret_access_key
config definition 1.0

In config definition 1.0 I show the structure of the credentials file, but you can find more about managing access keys in the IAM documentation. In the screenshot below you can see an easy way to create keys/secrets in your AWS IAM account; they are the same key/secret/IAM user used to create your EKS cluster.

(screenshot: IAM console, creating an access key for the IAM user)

In reference to the aforementioned yaml definition 1.0, we need to get the server endpoint (<endpoint-url>) and the certificate-authority-data (<base64-encoded-ca-cert>): in the EKS console screenshot below, pay attention to "API server endpoint" and "Certificate Authority".

(screenshot: EKS console, cluster details showing the API server endpoint and Certificate Authority)

This is just the configuration process; you will need to modify the kubeconfig file (~/.kube/config) every time you create a new cluster. That can be done by command or by modifying the kubeconfig file manually. I think you should do this process manually at least a few times, because it will help you understand it quite well.
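
Once both files are in place and aws-iam-authenticator is on your PATH, a quick way to verify the whole chain is any kubectl call against the cluster, for example:

kubectl get svc

If the token exchange works, you should at least see the built-in kubernetes ClusterIP service.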

So you can decide. From my point of view: when I am working in a Linux environment I run eksctl for the cluster creation process, but when I am working on OS X I prefer to create the cluster and worker nodes through the AWS console; it is just an opinion. In that case (OS X) I don't install the AWS CLI. It is enough with ~/.kube/config, ~/.aws/credentials, and the installation of aws-iam-authenticator.

Akka: an intelligent solution for concurrency

Sunday, March 10, 2019


I am here because we needed a solution to a concurrency problem: access hundreds of thousands of websites, and parse and process all of them asynchronously. I tried many options, always with one main idea: Futures.
Coding that way, I wrote something similar to the following code:



import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success, Try}

//....................

// game_1: Int value indicating the first game

// game_n: Int value indicating game n

// launch one Future per website we need to parse
val resultMatch: List[Future[List[Match]]] = (game_1 until game_n).map {
  matchnum =>
    Future {
      new ParserFootbalHTML("https://www.anyurl.com/..." + matchnum).finalMatch
    }
}.toList
//..............
// register a callback to run when each computation completes
resultMatch.foreach(future => future.onComplete(doComplete))
//...............
def doComplete: PartialFunction[Try[List[Match]], Unit] = {
  case matches @ Success(_) =>
    val resultMatch: List[Match] = matches.getOrElse(List.empty[Match])
    resultMatch.foreach(m =>
      // do anything with each match
      println(m)
    )
  case fail @ Failure(_) => println("error")
}
//....................

With minimal changes, the main idea was to launch as many "Futures" as websites we need to parse. Then, after every computation finishes (onComplete), I do "anything" with the result. After several tests in different scenarios, checking how long they took, I decided to explore Akka to improve performance and reduce computation time.
The core of the Akka libraries is the Actor Model; I recommend understanding it first, because it is the basis of these libraries, and Akka has made its own implementation of this model.
Now that we understand the Actor Model, we can have a look at Akka's implementation of it.



  • Actor and Actor-1 communicate exclusively by exchanging messages asynchronously; messages are placed into the recipient's MailBox.
  • Actor and Actor-1 should avoid hogging resources.
  • Messages SHOULD NOT be mutable.

ONE important thing from the official documentation:

The only meaningful way for a sender to know whether an interaction was successful is by receiving a business-level acknowledgement message, which is not something Akka could make up on its own (neither are we writing a “do what I mean” framework nor would you want us to).

So, as we can see, this is the real concept of encapsulation. Everything happens inside the actor, and when it wants to delegate a task it sends a message to another actor and keeps working. For that reason you ought to granulate the work into actors that are as simple as possible, so that in the end each actor's processing is as simple as possible.
Split up the task and delegate it until the pieces become small enough to be handled in one piece! And, as the quote above suggests, confirm success with a business-level acknowledgement; see the sketch below.
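
As a minimal sketch of such a business-level acknowledgement (the Worker actor and the Process/Done messages are illustrative, not from any real codebase):

import akka.actor.{Actor, ActorRef}

// The worker replies with an explicit Done message: delivery alone
// tells the sender nothing about business-level success.
final case class Process(job: String, replyTo: ActorRef)
final case class Done(job: String)

class Worker extends Actor {
  def receive = {
    case Process(job, replyTo) =>
      // ... do the actual work here ...
      replyTo ! Done(job) // the only meaningful "it worked" signal
  }
}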


fig 1.3

As shown in fig 1.3, when you start the ActorSystem several actors have already been created.
Actor1 is a supervisor of Actor1.1 and Actor1.2.

Creating only ONE ActorSystem per application is considered a best practice, because it is a very heavy object, and it can be created in different ways:
The ActorSystem can be created in the same execution context or in any other execution context created by us and declared implicitly.
As we can see in fig 1.3 there are different ways of creating actors:
  • Actors that are children of the user "guardian" are created by system.actorOf. I tend not to create many actors of this kind, because they will be the root actors of my app.
  • Actors that are children of my own actors are created by context.actorOf; the context is an implicit ActorContext val available in the Actor trait. Both ways are shown in the sketch below.
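
A minimal sketch of both creation paths (Parent, Child and the "demo" system name are illustrative):

import akka.actor.{Actor, ActorSystem, Props}

class Child extends Actor {
  def receive = {
    case msg => println(s"child ${self.path} got: $msg")
  }
}

class Parent extends Actor {
  // created through the ActorContext: its path will be /user/parent/child-1
  val child = context.actorOf(Props[Child], "child-1")
  def receive = {
    case msg => child ! msg // fire-and-forget delegation
  }
}

object Demo extends App {
  val system = ActorSystem("demo") // ONE ActorSystem per application
  val parent = system.actorOf(Props[Parent], "parent") // child of the user guardian
  parent ! "hello"
}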
The ActorContext exposes contextual info for the Actor:
  • self()
  • sender()
  • watch(): used to track another actor's lifecycle and know when it stops; expect a "Terminated" message.
  • unwatch()
  • actorOf()
  • become(): changes the actor's behavior, i.e. which messages we process and how. An actor can start processing one type of message and then change its behavior for any reason (see the sketch after this list).
  • actorSelection(actor path): we can select an actor by its path (ref. fig 1.3: user/Actor1, user/Actor1/Actor1.2)
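
A minimal sketch of become(), referenced in the list above (the Connection actor and its string messages are illustrative):

import akka.actor.Actor

class Connection extends Actor {
  def offline: Receive = {
    case "connect" => context.become(online) // switch the message handler
    case other     => println(s"offline, dropping: $other")
  }
  def online: Receive = {
    case "disconnect" => context.become(offline)
    case msg          => println(s"processing: $msg")
  }
  def receive = offline // initial behavior
}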
When we create an actor we get an ActorRef. This is an immutable reference that identifies the actor for as long as it lives; if the actor is stopped and a new one is created in its place, the new incarnation gets a different ActorRef.

Terminated(ActorRef that is being watched; ref. watch() above): when this message arrives, the watched actor has stopped and freed up its resources.
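
A minimal sketch of watch() and Terminated (the Watcher name and its childProps parameter are illustrative):

import akka.actor.{Actor, ActorRef, Props, Terminated}

class Watcher(childProps: Props) extends Actor {
  val child: ActorRef = context.actorOf(childProps, "worker")
  context.watch(child) // we will receive Terminated(child) when it stops

  def receive = {
    case Terminated(`child`) =>
      println(s"${child.path} stopped; free resources or re-create it here")
      context.stop(self)
  }
}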

When we have an ActorRef:
  • ! or tell: fire a message and forget.
  • ? or ask: send a message asynchronously and return a Future.
As a best practice we should try to send messages via ! (tell): ask has to allocate a temporary internal actor and a Future, it requires a timeout, and the reply still has to be handled somewhere, so tell is cheaper and keeps the interaction purely message-driven.
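
When you do need a reply as a Future, ask looks like this (AskExample, GetStatus and Status are illustrative names):

import akka.actor.ActorRef
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

object AskExample {
  case object GetStatus
  final case class Status(text: String)

  def queryStatus(ref: ActorRef): Future[Status] = {
    implicit val timeout: Timeout = Timeout(5.seconds) // ask always needs a timeout
    (ref ? GetStatus).mapTo[Status] // the reply arrives as a Future
  }
}

// fire-and-forget alternative: no timeout, no Future allocated
// ref ! GetStatus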

We indicate below the best way to create and invoke different types of actors:

.............

object BootNotScheduled extends App with ScalaLogger {

  val moviePageBoundaries: List[Int] = Try(List(args(0).toInt, args(1).toInt)) match {
    case Success(expectedList) => expectedList
    case Failure(e) =>
      log.error("Error with the boundaries of your page numbers, review your parameters {}", e.toString)
      System.exit(1)
      Nil
  }

  // This actor will control the whole System ("IoTDaemon_Not_Scheduled")
  val system = ActorSystem(ConstantsObject.NotScheduledIoTDaemon)

  // creating the ioTAdmin actor
  val ioTAdmin = system.actorOf(IoTAdmin.props, ConstantsObject.NotScheduledIoTDaemonProcessing)

  val filteredAndOrderedList = moviePageBoundaries.toSet.toList.sorted
  val filteredAndOrderedListGamesBoundaries = (filteredAndOrderedList.head to filteredAndOrderedList.last).toList

  // Sending a message to the ioTAdmin actor
  ioTAdmin ! ParseUrlFilms(filteredAndOrderedListGamesBoundaries, Option(ioTAdmin))
}
ref. to BootNotScheduled code in my github

Props are config classes used to specify options when you create actors. Below is a snapshot configuring the ioTAdmin actor created in the code above:

........

object IoTAdmin {
  case class ParseUrlFilms(listTitles: List[moviePage], actorRef: Option[ActorRef]=None)
  case object ErrorStopActor
  case object StopProcess
  def props: Props = Props(new IoTAdmin)
}

...

override def receive = {
.....

}
ref. to IoTAdmin code in my github

The way I deal with configuration is similar to what is done in most libraries and frameworks, even in the Java language: I read the config info from application.conf. There are several ways to do that; you can find them in the Akka configuration documentation, although it does not really explain how to deal with it all.

What you have to consider when dealing with configuration files in Akka:
  • Akka configuration values. We will use the default values for the moment.
  • Where the configuration file should be saved and how to name it.
  • How the configuration files are loaded or read [related classes/interfaces: ConfigFactory, Config].
The configuration files follow the HOCON specification, and we can find very simple examples on github that will let us create config files easily.

The main Akka config file is usually application.conf, and it is usually placed in the "resources" folder.
In real life we usually need to work in different environments: sometimes we create our application.conf during the compilation process, other times we ship specific configurations depending on the environment we are working in. We have created a real-world config example in which you can configure a production (production.conf) or development (development.conf) environment. This is not the focus of this post, so if you want to learn how to set up and manage config files, you can go to my post about generating distributions for different environments.
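
A minimal sketch of loading application.conf with ConfigFactory (the my-app.parser.base-url key is illustrative):

import com.typesafe.config.{Config, ConfigFactory}

object ConfigExample extends App {
  // ConfigFactory.load() reads application.conf from the classpath,
  // honouring overrides such as -Dconfig.file or -Dconfig.resource.
  val config: Config = ConfigFactory.load()
  val baseUrl: String = config.getString("my-app.parser.base-url")
  println(baseUrl)
}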

We already have the minimal information needed to create an Akka app using only the actor module. Anyway, you can have a look at my github repository akka-quickstart-scala and check how to install and launch it (try the first option in your terminal: sbt "AkkaSchedulerPoc/runMain com.ldg.BootNotScheduled 1 5").

We have talked in previous posts on this blog about the Akka libraries, specifically about schedulers, but this post is about the first steps in Akka and its core, the actor model. We will see other features in future posts, like:
Router
Dispatcher
Akka TestKit
Akka HTTP




Generating a Scala application distribution in Akka & Play for different environments

Tuesday, February 12, 2019

I have too much to say about sbt. I have been using this build tool for a long time and, honestly, every time I have to do something different I need to check the sbt documentation. I've been thinking about something like "sbt in a nutshell", ha ha. Forget about it! At the same time, many people think that SBT is confusing, complicated, and opaque, and there are plenty of explanations of why. I won't do that in this article.
My target today: how to generate a distribution package for different environments, and most importantly, in just one command line: clean + compile + build distribution + specific environment.

What we need:
  1. pass parameters to our build process indicating which environment we want to build.
  2. create a plugin to read those parameters.
  3. create a task that lets us add the appropriate configuration files to the project before the compilation process.
  4. create a command that packages the distribution with the suitable options.
I am going to use sbt native packager to generate the distributions for the different environments. You don't need to be familiar with this library to understand this post, but if you want to generate distributions other than the ones explained here, my recommendation is to learn it in depth.

How to solve point 1:

The first thing we need to do is pass a variable to our build process:

sbt -Dour_system_property_var=value_to_assign

So in our case we need to indicate the environment we are building for:
I am going to read this environment variable and set the proper configuration file that will be added to our jar distribution.

You need to code what to do with the parameters that have been passed. You can take some reference from SBT parameters and Build Environment.

Our next steps:
  1. read the variable indicating which environment we want to generate the new package distribution for.
  2. depending on that parameter, indicate which config files we want to use in our compilation process.
To carry out the aforementioned process we need to create an sbt plugin (BuildEnvPlugin.scala):

import sbt.Keys._
import sbt._
import sbt.plugins.JvmPlugin

/**
  * make call in this way for generate development distro
  * sbt -Denv=dev reload clean compile stage
  *
  * make call in this way for generate production distro
  * sbt -Denv=prod reload clean compile stage
  *
  */

/** sets the build environment */
object BuildEnvPlugin extends AutoPlugin {

  // make sure it triggers automatically
  override def trigger = AllRequirements
  override def requires = JvmPlugin

  object autoImport {
    object BuildEnv extends Enumeration {
      val Production, Test, Development = Value
    }
    val buildEnv = settingKey[BuildEnv.Value]("the current build environment")
  }
  import autoImport._

  override def projectSettings: Seq[Setting[_]] = Seq(
    buildEnv := {
      sys.props.get("env")
        .orElse(sys.env.get("BUILD_ENV"))
        .flatMap {
          case "prod" => Some(BuildEnv.Production)
          case "test" => Some(BuildEnv.Test)
          case "dev" => Some(BuildEnv.Development)
          case _ => None
        }
        .getOrElse(BuildEnv.Development)
    },
    // message indicating in what environment you are building on
    onLoadMessage := {
      val defaultMessage = onLoadMessage.value
      val env = buildEnv.value
      s"""|$defaultMessage
          |Running in build environment: $env""".stripMargin
    }
  )
}
code 1.0

The plugin code above returns a build environment that needs to be read by other processes. As a first solution we are going to handle this in the main build.sbt file.

In our main build.sbt file (snippet below, code 1.1), depending on the environment selected we copy the proper configuration file to conf/application.conf: see the mapping to "conf/application.conf" at the end of code 1.1, and note that buildEnv.value in the match expression is created/loaded by our customized sbt plugin (code 1.0 above).

.............

mappings in Universal += {
  val confFile = buildEnv.value match {
    case BuildEnv.Development => "development.conf"
    case BuildEnv.Test => "test.conf"
    case BuildEnv.Production => "production.conf"
  }
  ((resourceDirectory in Compile).value / confFile) -> "conf/application.conf"
}

.............
code 1.1

Once the environment is selected, we need to copy all the suitable configuration files; afterwards we need to wrap this code in a task.
At this point we can pass parameters to our build process indicating which environment we want to build, and we have coded a plugin that can read those parameters. Now we need to create a command, as simple as possible (CommandScalaPlay.scala), that will let us execute in one line: reload + clean + compile + tgz distribution of our project.

import sbt._

object CommandScalaPlay {

/** https://www.scala-sbt.org/sbt-native-packager/gettingstarted.html#native-tools */
  // A simple, multiple-argument command that prints "Hi" followed by the arguments.
  // Again, it leaves the current state unchanged.
  // launching from console a production building:  sbt -Denv=prod buildAll
  def buildAll = Command.args("buildAll", "<name>") { (state, args) =>
    println("buildAll command generating tgz in Universal" + args.mkString(" "))
    "reload"::"clean"::"compile"::"universal:packageZipTarball"::
    state
  }
}
code 1.2

In code 1.2 above we have created a very simple command that runs reload + clean + compile + tgz distribution of our project by prepending "reload" :: "clean" :: "compile" :: "universal:packageZipTarball" to the command state. You can glance over the sbt-native-packager output formats in Universal if you are interested in an output format other than tgz.

We need to add the command created above in code 1.2 to our main build.sbt file. In the code below (code 1.3) we add it via commands ++= Seq(buildAll).

lazy val commonSettings = Seq(
   name:= """sport-stats""",
   organization := "com.mojitoverdeintw",
   version := "1.0",
   scalaVersion := "2.11.7"
 )
................

settings(commonSettings,
    resourceDirectory in Compile := baseDirectory.value / "conf",
    commands ++= Seq(buildAll))
................
code 1.3

In the code above we tell the compiler that our resource directory will be /conf (the resourceDirectory in Compile setting in code 1.3). This shouldn't be necessary in plain Scala apps, because resourceDirectory in sbt points to src/main/resources, but in Play Framework it should point to /conf.

If you want to create a specific distribution:

go to your HOME_PROJECT and then:
  1. a tgz for the production env: sbt -Denv=prod buildAll
  2. a tgz for the development env: sbt -Denv=dev buildAll
  3. a universal for the development env: sbt -Denv=dev stage
  4. a universal for the production env: sbt -Denv=prod stage
If you want to implement the above scenario in your own project you need to take a look at the following points:
  1. sbt version: in the above scenario it is 0.13.11; reference file: build.properties in my github repository.
  2. plugins.sbt (in my github repository): pay attention to this line: addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.2").
  3. build.sbt (in my github repository): pay attention to this file, mainly to code 1.1 and code 1.4 below in this post.
Take a look at the image below showing where the aforementioned files (CommandScalaPlay.scala, BuildEnvPlugin.scala, plugins.sbt) should live.



                                         fig 1.1

Remember that our configuration folder in Play Framework, /conf, should hold all the configuration files we want for the different environments during the distro generation process; at the end, one of those files will become our application.conf configuration file.

In another scenario, instead of creating the application.conf file in our main build.sbt file, I reckon it should be done in code, because build.sbt should be as clean as possible.


                                          fig 1.2


import sbt._
import sbt.Keys._
import sbt.plugins.JvmPlugin

/**
  * make call in this way for generate development distro
  * sbt -Denv=dev reload clean compile stage
  *
  * make call in this way for generate production distro
  * sbt -Denv=prod reload clean compile stage
  *
  */

/** sets the build environment */
object BuildEnvPlugin extends AutoPlugin {

  ............................

  lazy val configfilegeneratorTask = Def.task {
    val confFile = buildEnv.value match {
      case BuildEnv.Development => "development.conf"
      case BuildEnv.Test => "test.conf"
      case BuildEnv.Stage => "stage.conf"
      case BuildEnv.Production => "production.conf"
    }
    val filesrc = (resourceDirectory in Compile).value / confFile
    val file = (resourceManaged in Compile).value / "application.conf"
    IO.copyFile(filesrc,file)
    Seq(file)
  }
.....................
}
code 1.4

Pay special attention to the task definition configfilegeneratorTask in the code above (code 1.4), in BuildEnvPlugin.scala (in my github repository), and then modify the build.sbt file. See the code below for the changes that have to be made in our main build.sbt (in my github repository).


...................

lazy val root = (project in file(".")).
  aggregate(internetOfThings).
  aggregate(AkkaSchedulerPoc).
  settings(commonSettings,commands ++= Seq(buildAll))

lazy val AkkaSchedulerPoc = (project in file("modules/AkkaScheduler")).
  settings(commonSettings,libraryDependencies ++= Seq(
    "com.enragedginger" %% "akka-quartz-scheduler" % "1.6.1-akka-2.5.x"
 ),resourceGenerators in Compile += configfilegenerator)

...................
code 1.5

In the code above (code 1.5), the line resourceGenerators in Compile += configfilegenerator adds our task configfilegenerator to the sbt compilation process. You can have a look at these files: build.properties, plugins.sbt, build.sbt.

The reference code for this last scenario is hosted in a different project than the first scenario, but the same rules apply.

Remember that the plugin and the task should be under the root project folder (fig 1.1 above in this post).

Remember that our configuration folder in Akka is the /resources folder (rather than Play's /conf): it should hold all the configuration files we want for the different environments during the distro creation process, and at the end one of those files will become our application.conf configuration file.



                                                 fig 1.3
                                 

Keep in mind, when you are working with AutoPlugin, that when you create a task or another object that executes a process in a specific phase, you could face problems with the initialization process, because some settings are loaded twice. The sbt initialization documentation could be a useful read.
In next posts I will explain through code other important implementation when we are working with sbt.

There are different ways to generate distributions for different environments; SBT has many other solutions, but in my opinion they are specific to a single file and not for a complete configuration (log specification file + application configuration file + other properties or metadata files that can change when we change environment).

In this post we showed examples in Play Framework and Akka. But what if you want to generate a distribution without any plugins? You can choose the options SBT gives you:

  • sbt -Dconfig.file=<Path-to-my-specific-config-file>/my-environment-conf-file.conf other-phases 

An example of the aforementioned statement:

     sbt -Dconfig.file=/home/myusr/development.conf reload clean compile stage 

In the same way you can use -Dconfig.resource; in that case you don't need to specify the path, because sbt will try to find the file in the resource directory.
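
For example, assuming development.conf is available on the classpath:

     sbt -Dconfig.resource=development.conf reload clean compile stage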

You can do the same with the logging configuration:

  • sbt -Dlogger.file=<Path-to-my-specific-logger-config-file>/my-environment-logger-conf-file.conf

Or, like config.resource, you can use -Dlogger.resource, and in this case you don't need to specify the path to the log configuration file, because it will be loaded from the classpath.

There are some interesting links in the sbt/Play documentation explaining how to deal with the distro generation process in different environments:

  1. Specifying different configuration files.
  2. Specifying different logging configuration files.

Something to keep in mind: to use configuration files properly, pay attention to HOCON (Human-Optimized Config Object Notation) and how it works. It can be very helpful. We may tackle this subject in future posts.

We agreed that SBT is not easy; it is a powerful tool, but it is not easy. Perhaps for that reason developers don't use it when they can use another build tool. Anyway, I will keep posting about problems we face when coding with Scala, and how SBT can make our work easier.