[repost] Maven in Action (9) — Packaging Techniques

original:http://www.infoq.com/cn/news/2011/06/xxb-maven-9-package

"Packaging" sounds rather informal; the more formal phrase would be "building the project's software package". Concretely, it means taking the various files in a project — source code, compiled bytecode, configuration files, documentation — and generating an archive in a standard format. The most common examples are of course JAR and WAR packages; a more complex example is the distribution archive on Maven's official download page, which has a custom layout so users can unpack it and use it from the command line right away. As a build tool, Maven naturally has an obligation to help users create all kinds of packages: standard JAR and WAR packages are trivial, and slightly more complex custom formats must be supported as well. This article introduces some common packaging scenarios and how to implement them. Besides the packages just mentioned, you will also see how to generate source packages, Javadoc packages, and CLI packages that can be run directly from the command line.

The Meaning of Packaging

Every Maven project defines the POM element packaging (if omitted, the default value is jar). As the name suggests, this element determines how the project is packaged. In practice, if you do not declare the element, Maven builds a JAR package for you; if you set its value to war, you get a WAR package; and if you set it to pom (for example, in a parent module), no package is built at all. Beyond these, Maven supports several other popular packaging formats out of the box, such as ejb3 and ear. You do not need to understand the packaging details; all you have to do is tell Maven "this is the kind of project I am" — that is the power of convention over configuration.
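As a minimal illustration (the groupId and artifactId here are hypothetical), the packaging element is declared at the top level of the POM:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- jar is the default; set to war, pom, ejb3, ear, etc. as needed -->
  <packaging>jar</packaging>
</project>
```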

To better understand Maven's default packaging behavior, let's look at what happens behind this simple declaration. Running mvn package on a jar project produces output like this:

[INFO] --- maven-jar-plugin:2.3.1:jar (default-jar) @ git-demo ---
[INFO] Building jar: /home/juven/git_juven/git-demo/target/git-demo-1.2-SNAPSHOT.jar

By comparison, running mvn package on a war project produces this:

[INFO] --- maven-war-plugin:2.1:war (default-war) @ webapp-demo ---
[INFO] Packaging webapp
[INFO] Assembling webapp [webapp-demo] in [/home/juven/git_juven/webapp-demo/target/webapp-demo-1.0-SNAPSHOT]
[INFO] Processing war project
[INFO] Copying webapp resources [/home/juven/git_juven/webapp-demo/src/main/webapp]
[INFO] Webapp assembled in [90 msecs]
[INFO] Building war: /home/juven/git_juven/webapp-demo/target/webapp-demo-1.0-SNAPSHOT.war

For the same package lifecycle phase, Maven invokes maven-jar-plugin for a jar project and maven-war-plugin for a war project; in other words, packaging directly affects Maven's build lifecycle. Understanding this is important, especially when you need to customize packaging behavior, because you must know which plugin to configure. A common example is excluding certain web resource files when packaging a war project, in which case you would configure maven-war-plugin as follows:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <version>2.1.1</version>
    <configuration>
      <webResources>
        <resource>
          <directory>src/main/webapp</directory>
          <excludes>
            <exclude>**/*.jpg</exclude>
          </excludes>
        </resource>
      </webResources>
    </configuration>
  </plugin>

Source and Javadoc Packages

As explained in the "Coordinate Planning" article in this series, a Maven project produces only one main artifact; when you need to generate additional attached artifacts, you use a classifier. Source and Javadoc packages are excellent examples of attached artifacts. They are widely useful — especially source packages: when you use a third-party dependency, you sometimes want to step into its source code in your IDE to inspect the implementation details. If the dependency has published a source package to a Maven repository, an IDE like Eclipse can use the m2eclipse plugin to resolve and download it and attach it to your project, which is very convenient. Because generating a source package is such a common requirement, Maven provides an official plugin for the task:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-source-plugin</artifactId>
    <version>2.1.2</version>
    <executions>
      <execution>
        <id>attach-sources</id>
        <phase>verify</phase>
        <goals>
          <goal>jar-no-fork</goal>
        </goals>
      </execution>
    </executions>
  </plugin>

Similarly, generating a Javadoc package only requires the following plugin configuration:

  <plugin>          
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
    <version>2.7</version>
    <executions>
      <execution>
        <id>attach-javadocs</id>
          <goals>
            <goal>jar</goal>
          </goals>
      </execution>
    </executions>
  </plugin>

To make the vast resources in the Maven Central repository easier for all Maven users to consume, the maintainers of the central repository require open-source projects to provide source and Javadoc packages when submitting artifacts. This is a good practice, and readers may want to try adopting it inside their own companies to encourage exchange between projects.

Executable CLI Packages

Besides the regular JAR and WAR packages, source packages, and Javadoc packages discussed above, another commonly used package type is the CLI (Command Line) package that can be run directly from the command line. By default, the JAR that Maven builds contains only the compiled .class files and the project's resource files; to get a JAR file that can be run directly with the java command, two more conditions must be met:

  • The JAR's /META-INF/MANIFEST.MF metadata file must contain a Main-Class entry.
  • All of the project's dependencies must be on the classpath.
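For reference, a manifest that satisfies the first condition contains an entry like the following (the class name is hypothetical):

```
Manifest-Version: 1.0
Main-Class: com.example.app.Main
```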

Maven has several plugins that can accomplish this, but the most convenient is maven-shade-plugin. It lets you configure the Main-Class value and writes it into the /META-INF/MANIFEST.MF file at packaging time. As for the project's dependencies, it cleverly unpacks all the dependency JARs and merges the resulting .class files, together with the project's own .class files, into the final CLI package, so that when you execute the CLI JAR, every required class is already on the classpath. Here is a sample configuration:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>1.4</version>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>shade</goal>
        </goals>
        <configuration>
          <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
              <mainClass>com.juvenxu.mavenbook.HelloWorldCli</mainClass>
            </transformer>
          </transformers>
        </configuration>
      </execution>
    </executions>
  </plugin>

In the example above, my Main-Class is com.juvenxu.mavenbook.HelloWorldCli. After the build, alongside the regular hello-world-1.0.jar file I also get a hello-world-1.0-cli.jar file. Attentive readers will have noticed that the classifier used here is cli. Finally, I can run the program with java -jar hello-world-1.0-cli.jar.

Custom-Format Packages

Real-world software projects often have more complex packaging requirements. For example, we might need to provide customers with a product distribution package that contains not only the project's bytecode but also its dependencies and some scripts, so customers can unpack it and run it right away; the distribution should also include some essential documentation. The project's source tree would then look roughly like this:

pom.xml
src/main/java/
src/main/resources/
src/test/java/
src/test/resources/
src/main/scripts/
src/main/assembly/
README.txt

Besides the basic pom.xml and the usual Maven directories, there is a src/main/scripts/ directory containing script files such as run.sh and run.bat, and src/main/assembly/ contains an assembly.xml, the packaging descriptor introduced shortly. Finally, README.txt is a simple document.

We want to produce a zip-format distribution package with the following structure:

bin/
lib/
README.txt

The bin/ directory contains the executable scripts run.sh and run.bat, the lib/ directory contains the project JAR and all dependency JARs, and README.txt is the document mentioned above.

With the requirements laid out, it is time to bring in Maven's most powerful packaging plugin: maven-assembly-plugin. It supports a variety of archive formats, including zip, tar.gz, and tar.bz2. Through a packaging descriptor (in this example src/main/assembly/assembly.xml), it lets you choose exactly which file sets, dependencies, modules, and even local-repository files to package, and you can also control the destination path of each item. Here is the descriptor src/main/assembly/assembly.xml for the requirements above:

<assembly>
  <id>bin</id>
  <formats>
    <format>zip</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <useProjectArtifact>true</useProjectArtifact>
      <outputDirectory>lib</outputDirectory>
    </dependencySet>
  </dependencySets>
  <fileSets>
    <fileSet>
      <outputDirectory>/</outputDirectory>
      <includes>
        <include>README.txt</include>
      </includes>
    </fileSet>
    <fileSet>
      <directory>src/main/scripts</directory>
      <outputDirectory>/bin</outputDirectory>
      <includes>
        <include>run.sh</include>
        <include>run.bat</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>
  • First, the id in this assembly.xml becomes the classifier of the generated file.
  • Second, formats defines the archive formats to generate — here zip — so combined with the id we get a file named hello-world-1.0-bin.zip (assuming an artifactId of hello-world and a version of 1.0).
  • dependencySets selects dependencies and defines where they are packaged. The dependencySet declared here includes all dependencies by default; useProjectArtifact says to include the artifact produced by the project itself; and everything is packaged under the archive's lib path (specified by outputDirectory).
  • fileSets let you control packaging at the granularity of individual files and directories. The first fileSet here packages README.txt into the root of the archive, and the second packages run.sh and run.bat from src/main/scripts into the archive's bin directory.
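As a related tip (an addition of mine, not from the original article), a fileSet can also set Unix permissions so that the packaged scripts are executable after extraction:

```xml
<fileSet>
  <directory>src/main/scripts</directory>
  <outputDirectory>/bin</outputDirectory>
  <!-- mark the scripts executable in the generated archive -->
  <fileMode>0755</fileMode>
</fileSet>
```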

The packaging descriptor supports far more configuration than this article can cover; to avoid drowning the reader in detail, I will not expand on it here. Readers who need more can consult the plugin's documentation.

Finally, we configure maven-assembly-plugin to use the packaging descriptor and bind it to a lifecycle phase so the packaging runs automatically:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>2.2.1</version>
    <configuration>
      <descriptors>
        <descriptor>src/main/assembly/assembly.xml</descriptor>
      </descriptors>
    </configuration>
    <executions>
      <execution>
        <id>make-assembly</id>
        <phase>package</phase>
        <goals>
          <goal>single</goal>
        </goals>
      </execution>
    </executions>
  </plugin>

After running mvn clean package, we get the distribution package hello-world-1.0-bin.zip in the target/ directory.

Summary

Packaging is one of the most important parts of building a project. This article covered the mainstream Maven packaging techniques: how the default packaging works, how to produce source and Javadoc packages, how to produce runnable CLI packages, and, going further, how to define custom package formats for specific needs. Many Maven plugins were involved; the most important — and also the most complex and powerful — is maven-assembly-plugin. In fact, Maven's own distribution package is built with maven-assembly-plugin; interested readers can look at the source code to see for themselves.

[repost] How Does Maven Create a WAR Package?

original:http://my.oschina.net/u/939534/blog/173863

I recently came across an article introducing Maven basics that I think will help anyone new to Maven. My translation skills are limited — please bear with me.

Introduction

When working on a web application, the final deliverable is a WAR package. The Maven build system makes creating WAR packages easy. Let's look at how Maven turns a source project into a WAR package.

Maven version: Apache Maven 3.0.4

Example Project

Let's look at a very typical Maven-ized web project.

The corresponding pom.xml is as follows:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>mygroup.com</groupId>
  <artifactId>myprojectname</artifactId>
  <packaging>war</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>myprojectname Maven Webapp</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>myprojectname</finalName>
  </build>
</project>

We build the WAR package with this command:

mvn package
C:\Projects\myprojectname>mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building myprojectname Maven Webapp 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
...
[INFO] --- maven-war-plugin:2.1.1:war (default-war) @ myprojectname ---
[INFO] Packaging webapp
[INFO] Assembling webapp [myprojectname] in [C:\Projects\myprojectname\target\myprojectname]
[INFO] Processing war project
[INFO] Copying webapp resources [C:\Projects\myprojectname\src\main\webapp]
[INFO] Webapp assembled in [18 msecs]
[INFO] Building war: C:\Projects\target\myprojectname.war
...

The WAR is generated under the target directory:

/target/myprojectname.war

The following diagram summarizes how Maven builds the WAR package.

 

Maven Default Configuration

We know Maven can easily build a source project into a WAR package, yet almost nothing is set in the POM file. How does that work? Maven actually has its own defaults. This is called "convention over configuration": Maven supplies default values in its configuration.

First, some Maven plugins are bound to Maven's lifecycle out of the box. For example, compiler:compile is the default goal for the compile phase, which means the compiler plugin is invoked when the build reaches that phase. If the packaging is war, then war:war is bound to the package phase.

Second, plugins have default values for any parameters that are not explicitly set. For example, the compiler:compile goal has a compilerId parameter whose default value, javac, means the JDK compiler is used. You can override this configuration when you need something different.
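For instance (a hypothetical snippet of mine, not from the original article), overriding the compiler plugin's default source and target levels in the POM looks like this:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.3.2</version>
  <configuration>
    <!-- override the default Java source/target levels -->
    <source>1.6</source>
    <target>1.6</target>
  </configuration>
</plugin>
```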

Third, some settings come from the Super POM, which every POM implicitly inherits. Since Maven 3, the Super POM is located at:

maven_dir/lib/maven-model-builder-3.0.3.jar:org/apache/maven/model/pom-4.0.0.xml

There we can find many default configuration values:

<build>
    <directory>${project.basedir}/target</directory>
    <outputDirectory>${project.build.directory}/classes</outputDirectory>
    <finalName>${project.artifactId}-${project.version}</finalName>
    <testOutputDirectory>${project.build.directory}/test-classes</testOutputDirectory>
    <sourceDirectory>${project.basedir}/src/main/java</sourceDirectory>
    <scriptSourceDirectory>src/main/scripts</scriptSourceDirectory>
    <testSourceDirectory>${project.basedir}/src/test/java</testSourceDirectory>
    <resources>
      <resource>
        <directory>${project.basedir}/src/main/resources</directory>
      </resource>
    </resources>
    <testResources>
      <testResource>
        <directory>${project.basedir}/src/test/resources</directory>
      </testResource>
    </testResources>
  ...
  </build>

Maven Lifecycle

In our project, running the mvn package command makes Maven execute six phases of its lifecycle:

process-resources, compile, process-test-resources, test-compile, test and package

Each phase contains one or more goals. Goals are provided by Maven plugins: a plugin can have one or more goals. For example, the compiler plugin has two goals: compiler:compile and compiler:testCompile.

We can use the mvn help:describe -Dcmd=phasename command to list them, as follows:

C:\Project\myprojectname>mvn help:describe -Dcmd=package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building myprojectname Maven Webapp 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-help-plugin:2.1.1:describe (default-cli) @ myprojectname ---
[INFO] 'package' is a phase corresponding to this plugin:
org.apache.maven.plugins:maven-war-plugin:2.1.1:war
 
It is a part of the lifecycle for the POM packaging 'war'. This lifecycle includes the following phases:
* validate: Not defined
* initialize: Not defined
* generate-sources: Not defined
* process-sources: Not defined
* generate-resources: Not defined
* process-resources: org.apache.maven.plugins:maven-resources-plugin:2.5:resources
* compile: org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile
* process-classes: Not defined
* generate-test-sources: Not defined
* process-test-sources: Not defined
* generate-test-resources: Not defined
* process-test-resources: org.apache.maven.plugins:maven-resources-plugin:2.5:testResources
* test-compile: org.apache.maven.plugins:maven-compiler-plugin:2.3.2:testCompile
* process-test-classes: Not defined
* test: org.apache.maven.plugins:maven-surefire-plugin:2.10:test
* prepare-package: Not defined
* package: org.apache.maven.plugins:maven-war-plugin:2.1.1:war
* pre-integration-test: Not defined
* integration-test: Not defined
* post-integration-test: Not defined
* verify: Not defined
* install: org.apache.maven.plugins:maven-install-plugin:2.3.1:install
* deploy: org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy
 
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.496s
[INFO] Finished at: Sat May 12 04:30:35 CEST 2012
[INFO] Final Memory: 5M/121M
[INFO] ------------------------------------------------------------------------

Let's look at each goal:

1. resources:resources

   This goal copies the contents of the resources directory to the output directory.

2. compiler:compile

   This goal compiles the project's source code.

3. resources:testResources

   This goal copies the test resources to the test output directory.

4. compiler:testCompile

   This goal compiles the test sources.

5. surefire:test

   This goal runs the project's unit tests; the compiled test classes are placed in /target/test-classes.

6. war:war

   This goal creates the WAR package. It gathers all the required files under

/target/myprojectname/

and then archives them into the final .war file. One of its steps is copying /src/main/webapp/ to the output directory.

Another important step performed by the WAR plugin is copying the class files into the WEB-INF/classes directory and the project's dependency JARs into the WEB-INF/lib directory.
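Putting this together, the layout of the assembled WAR looks roughly like the sketch below (index.jsp and web.xml are assumed example files, not taken from the article):

```
myprojectname.war
├── index.jsp                      (web resources copied from src/main/webapp/)
├── WEB-INF/
│   ├── web.xml
│   ├── classes/                   (compiled classes and resources)
│   └── lib/                       (dependency JARs needed at runtime)
└── META-INF/
    └── maven/mygroup.com/myprojectname/
        ├── pom.xml
        └── pom.properties
```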

By default, the plugin also includes two Maven descriptor files:

  1. META-INF/maven/${groupId}/${artifactId}/pom.xml
  2. META-INF/maven/${groupId}/${artifactId}/pom.properties
#Generated by Maven
#Sat May 12 00:50:42 CEST 2012
version=1.0-SNAPSHOT
groupId=mygroup.com
artifactId=myprojectname


The final WAR package is placed in the /target/ directory.

Project Dependencies

The pom.xml file has one default dependency (JUnit). Let's add another commonly used JAR, log4j:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.16</version>
</dependency>

When no dependency scope is set, the default is compile scope, which means the dependency is available at compile, test, and run time.

Every JAR that is needed at runtime is copied into the /WEB-INF/lib directory.
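By contrast (a hypothetical addition for illustration, not from the original article), a dependency that the servlet container already supplies should be declared with provided scope so it is not copied into /WEB-INF/lib:

```xml
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>servlet-api</artifactId>
  <version>2.5</version>
  <!-- provided: available for compilation, excluded from WEB-INF/lib -->
  <scope>provided</scope>
</dependency>
```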

[repost ]Bluemix Launches IBM Containers Beta Based on Docker

original:https://developer.ibm.com/bluemix/2014/12/04/ibm-containers-beta-docker/

At DockerCon Europe today we announced the availability of the beta of our new IBM container service on Bluemix. As a developer you can now leverage the power of Docker to deploy a container on Bluemix. The IBM Containers service is super easy to use. No need to deploy and manage Virtual Machines, set up Docker engine, or manage your own registries. And it is loaded with powerful capabilities.

  • Native container hosting, no need to manage the docker infrastructure yourself
  • Powerful networking, including support for real IP addresses on containers and private networking between your containers
  • Support for private Docker image registries, allowing you to store and share your images within your Organization
  • Integration with other services in Bluemix, allowing you to bind others services to your container application with ease
  • Standard IBM images for WebSphere Liberty and Node.js runtime environments
  • All based on standard Docker runtime and APIs
Get Started with IBM Containers

Let’s take a quick tour.

Getting Started

Simply log into your Bluemix account, go to the catalog and select the “IBM Containers” service.

[Screenshot: the IBM Containers service in the Bluemix catalog]

You are allowed a single instance of the service per Organization in Bluemix. Right now, for the beta, you are allowed 2GB of memory, 2 public IP addresses to attach to your containers, and 8 containers total. These limits will change in the future. Create the instance as normal.

[Screenshot: creating an IBM Containers service instance]

Demand for the service is high right now, so once you create the service you may be placed in the queue and receive a notification like this.

[Screenshot: notification that the service request has been queued]

Once your service instance is activated you will see the getting started screen. You will be asked to provide a name for your private Docker image registry. You will also see your API key that you will need to access the service from your command line. Make sure you have docker and the IBM Containers (ice) tools installed on your machine using the instructions provided.

[Screenshot: the IBM Containers getting started screen]

Run a Container

Let’s get started and run a container. The IBM Containers service comes with images for WebSphere Liberty and Node.js. And of course you can use any image from Docker Hub. Let’s use the Liberty image to start a basic application service container. That is easy to do. Simply do the following:

> ice run --name liberty registry-ice.ng.bluemix.net/ibmliberty

This will launch your container on Bluemix and give it a private IP address. Run "ice ps" to see the running instance. You should get something like this:

--------------------------------------------------------------------------------------------------------------------
Container Id                          Image Id                              Command  Created      Status   Name     Private IP     Public IP  Priv/Pub Ports
--------------------------------------------------------------------------------------------------------------------
67f1a5ec-5d22-4c9a-9f3e-8e0122f4b6b8  b679b48c-43d7-484e-b9b0-b2bf2baecc58           Dec 4 12:44  Running  liberty  172.16.35.247
--------------------------------------------------------------------------------------------------------------------

In order to send a request to this container you need to give it a public IP. This is easy as well. First request a public IP address.

$ ice ip request
Successfully obtained ip: "129.41.248.43"

Now attach that IP address to your running container

$ ice ip bind 129.41.248.43 liberty
Successfully bound ip

If you run “ice ps” again you will see that the container is now accessible on the new IP address.

--------------------------------------------------------------------------------------------------------------------
Container Id                          Image Id                              Command  Created      Status   Name     Private IP     Public IP      Priv/Pub Ports
--------------------------------------------------------------------------------------------------------------------
67f1a5ec-5d22-4c9a-9f3e-8e0122f4b6b8  b679b48c-43d7-484e-b9b0-b2bf2baecc58           Dec 4 12:44  Running  liberty  172.16.35.247  129.41.248.43
--------------------------------------------------------------------------------------------------------------------

You can then go to that IP address to access your container.

Use a custom image

Let’s do one last thing: add an application to that WebSphere Liberty image and run the new image. To do this we are going to build a new image using a Dockerfile. To start, let’s pull the Liberty image to your local machine.

$ docker pull registry-ice.ng.bluemix.net/ibmliberty

Next, let’s create a Dockerfile with your app in it. For this example I am using a simple hello WAR file. Your Dockerfile would look something like this:

FROM registry-ice.ng.bluemix.net/ibmliberty
ADD hello.war /opt/ibm/wlp/usr/servers/defaultServer/dropins/
ENV LICENSE accept

Now you can build the image

$ docker build -t hello .

Now that we have our new image, let’s push it to your private registry on the IBM Containers service and run it. To push it, do the following:

$ docker tag hello registry-ice.ng.bluemix.net/<namespace>/hello (replace <namespace> with the private registry namespace you selected at the beginning)
$ docker push registry-ice.ng.bluemix.net/<namespace>/hello

Now all you have to do is run it like above and bind the IP address.

$ ice run --name hello registry-ice.ng.bluemix.net/<namespace>/hello
$ ice ip bind <public-ip> hello

The Service UI

In addition to the command line interface you can view the containers you are running and the images in your registry via the Bluemix UI. Simply go to the IBM Containers service tile in your Dashboard. Select the containers tab to see your running containers…

[Screenshot: the Containers tab listing running containers]

…and the Images tab to see your images.

[Screenshot: the Images tab listing registry images]

Check back often for more information about this exciting new capability as it rapidly evolves over the coming weeks.

[repost ]Getting started with Liberty profile

original:https://developer.ibm.com/wasdev/docs/category/getting-started/

Getting Started with the WAS Liberty Profile

https://developer.ibm.com/wasdev/docs/getting_started_with_the_was_liberty_profile/

Important: These modules and exercises have been created for use with only the WAS v8.5 Liberty Developer Edition and the WAS v8.5 Liberty Server inside Eclipse. If your configuration differs, you might have to adapt the exercise instructions.

LAB 0 – Eclipse Labs

LAB 1 – Install the WAS Liberty Profile

o       In this lab, you will learn about WASdev, the developer community for WebSphere Application Server. 

o       You will also install the Liberty runtime and create your first Liberty server

LAB 2 – Your First Application

o       In this lab, you will write your first Liberty application and deploy and test it on IBM WebSphere Application Server V8.5 Liberty Profile

o       In this module, you will review the IBM WebSphere Application Server V8.5 Liberty Profile configuration

o       The server is pre-configured ready to run a servlet, so you will not need to modify the server configuration

o       You will deploy and run your Hello World servlet

LAB 3 – Liberty and Java Persistence API (Derby or DB2)

o       In this lab, you will learn about the Java Persistence API (JPA)

o       You will develop a JPA application

o       You will configure the WebSphere Application Server V8.5 Liberty Profile and run your JPA application with Derby

o       In this lab, you will learn about the Java Persistence API (JPA)

o       You will develop a JPA application

o       You will configure the WebSphere Application Server V8.5 Liberty Profile and run your JPA application with DB2

o       If you prefer to use Derby, complete module 1.1 instead

o       In this module, you will learn how to configure the Liberty profile to develop an application with database interaction, using the Eclipse JPA feature

o       In module 1.1, you installed Derby and created a new Liberty server

o       In module 2.1, you will configure the Liberty server with Derby for JPA

o       In this module, you will learn how to configure the Liberty profile to develop an application with database interaction, using the eclipse JPA feature

o       In module 1.2, you created the database and a new Liberty server

o       In module 2.2, you will configure the Liberty server with DB2 for JPA

o       In module 1.1 you installed Derby and created a server, and in module 2.1 you configured the server with JPA and JDBC

o       In module 1.2 you set up the runtime and the database, and in module 2.2 you configured the server

o       In module 3.2 you will create the JPA project and generate a JPA class that matches the database structure

o       In module 1.1 you installed Derby and created the server, in module 2.1 you configured the server, and in module 3.1 you created the application project and generated the JPA Account class

o       In module 4.1 you will add the application code for the DBInteractions class

o       In module 1.2 you created the database and runtime server, in module 2.2 you configured the server, and in module 3.2 you created the application project and generated the Account class

o       In module 4.2 you will add the application code for the DBInteractions class

o       In module 1.1 you set up the database and the runtime, in module 2.1 you configured the server, and in module 3.1 you created the application project and a JPA class

o       In module 4.1 you added the DBInteractions servlet

o       In module 5.1 you will add another servlet with business logic

 

o       In module 1.2 you set up the database and the runtime, in module 2.2 you configured the server, and in module 3.2 you created the application project and a JPA class

o       In module 4.2 you added the DBInteractions servlet

o       In module 5.2 you will add another servlet with business logic

o       In module 1.1 you installed Derby and created a server, in module 2.1 you configured the server, and in module 3.1 you created the application project and a database class

o       In modules 4.1 and 5 you added the DBInteraction and CustomerCredit servlets

o       In module 6.1 you will configure two files

  • index.html provides the browser interface for your web application
  • persistence.xml provides database configuration information

o       You will also deploy and test your application

 

o       In module 1.2 you set up the runtime and the database, in module 2.2 you configured the server, and in module 3.2 you created the application project and a database class

o       In modules 4.2 and 5.2 you added the DBInteraction and CustomerCredit servlets

o       In module 6.2 you will add two files

  • index.html provides the browser interface for your web application
  • persistence.xml provides database configuration information

o       You will also deploy and test your application

[repost ]Installing Couchdb on RHEL 5

original:http://wiki.apache.org/couchdb/Installing_on_RHEL5

These instructions also work on Red Hat Enterprise Linux compatible distributions like CentOS.

Note: COUCHDB-315 has an attached patch for the CouchDB README which adds instructions for RHEL 5.

Installing a prepackaged CouchDB

1. Enable the EPEL repository.

2. Install the couchdb package from EPEL:

# yum install couchdb

3. Edit config file to suit:

# vi /etc/couchdb/local.ini

4. Start CouchDB:

# service couchdb start

5. Set it to start automatically on reboots:

# chkconfig --level 345 couchdb on

Building CouchDB from source (with EPEL packages)

1. Install prerequisites. You will need to enable the EPEL repository for the js-devel and erlang packages (or build js and erlang from source). On AWS Linux, edit /etc/yum.repos.d/epel.repo and, inside the [epel] section, change enabled=1.

# yum install libicu-devel openssl-devel curl-devel make gcc erlang js-devel libtool which

1.1 If installing CouchDB >= 0.11, you will need cURL >= 7.18. Currently neither EPEL nor IUS provides a recent enough libcurl. Visit the curl download page for the most recent curl package.

$ wget http://curl.haxx.se/download/curl-7.20.1.tar.gz
$ tar -xzf curl-7.20.1.tar.gz
$ cd curl-7.20.1
$ ./configure --prefix=/usr/local
$ make
$ make test
# make install

2. Install CouchDB

The configure line below is for 64-bit; adjust for your arch (or leave out --with-erlang if configure can find it out for itself). You can use a release tarball instead of a checkout, in which case skip right to the ./configure line.

$ svn checkout http://svn.apache.org/repos/asf/couchdb/trunk couchdb
$ cd couchdb
$ ./bootstrap
$ ./configure --with-erlang=/usr/lib64/erlang/usr/include
$ make
# make install

3. Edit config file to suit

# vi /usr/local/etc/couchdb/local.ini

4. Create user, modify ownership and permissions

Create the couchdb user:

# adduser -r --home /usr/local/var/lib/couchdb -M --shell /bin/bash --comment "CouchDB Administrator" couchdb

See the README for additional chown and chmod commands to run.

4.1 Fix permissions

chown -R couchdb: /usr/local/var/lib/couchdb /usr/local/var/log/couchdb

5. Launch!

# sudo -u couchdb couchdb

Or as daemon:

# /usr/local/etc/rc.d/couchdb start

6. Run as daemon on start-up:

# ln -s /usr/local/etc/rc.d/couchdb /etc/init.d/couchdb
# chkconfig --add couchdb
# chkconfig --level 345 couchdb on

Building CouchDB from source (with standard packages only)

Tested with 64-bit CentOS 5.6. Replace “/opt/couchdb” with a directory of your choice.

1. Install prerequisites (standard packages only, no additional repositories).

# yum install gcc libtool xulrunner-devel libicu-devel openssl-devel

2. Build Erlang from otp_src_R14B.tar.gz:

$ ./configure --prefix=/opt/couchdb/erlang --without-termcap --without-javac --enable-smp-support --disable-hipe
$ make
# make install

3. Build Curl from curl-7.21.6.tar.gz:

$ ./configure --prefix=/opt/couchdb/curl
$ make
# make install

4. Build CouchDB from apache-couchdb-1.0.2.tar.gz

$ ERL=/opt/couchdb/erlang/bin/erl ERLC=/opt/couchdb/erlang/bin/erlc CURL_CONFIG=/opt/couchdb/curl/bin/curl-config LDFLAGS=-L/opt/couchdb/curl/lib ./configure --prefix=/opt/couchdb/couchdb --with-erlang=/opt/couchdb/erlang/lib/erlang/usr/include/ --with-js-include=/usr/include/xulrunner-sdk-1.9.2/ --with-js-lib=/usr/lib64/xulrunner-sdk-1.9.2/lib
$ make
# make install

Tip: mind the firewall

It’s very likely that the default installation of a Red Hat system has the firewall turned on. This can be verified by issuing:

# service iptables status

If it is active then it will list the rules, otherwise you’ll get an unrecognized service error message. The default firewall configuration on such system resides in /etc/sysconfig/iptables (and if you’re using ipv6 then /etc/sysconfig/ip6tables). In this case just insert a rule for CouchDB before the REJECT rule. By default, the rules should look like the following (already added the CouchDB rule):

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 53 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
### The following rule allows CouchDB connections from everywhere ###
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 5984 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

Similarly, the firewall could be active also on CentOS systems. The file is still the same (/etc/sysconfig/iptables) but the default rules change a bit. Also in this case insert the rule for CouchDB before the REJECT.

-A INPUT -p tcp --dport 5984 -j ACCEPT

In both cases, don’t forget to restart the iptables service

# service iptables restart

[repost ]How To Install CouchDB from Source on a CentOS 6 x64 VPS

original:https://www.digitalocean.com/community/tutorials/how-to-install-couchdb-from-source-on-a-centos-6-x64-vps

Introduction

CouchDB is a NoSQL database developed by The Apache Software Foundation that uses JSON for documents, JavaScript for MapReduce queries, and regular HTTP for an API. Often called “a database that completely embraces the web,” it’s used by many startups as well as corporations due to its flexibility and scalability.

As of this tutorial, the current stable version of CouchDB is 1.4.0.

It’s recommended to complete the Initial Server Setup with CentOS 6 tutorial before starting this one.

Step 1 – Install the Build Tools on your VPS

In order to compile CouchDB from source, you need to install some tools and dependencies on your virtual server.

The first thing you need to do is update your packages to the latest version:

sudo yum -y update

Next, you have to install the Development Tools:

sudo yum -y groupinstall "Development Tools"

And the dependencies required to compile CouchDB, Erlang, and SpiderMonkey:

sudo yum -y install libicu-devel curl-devel ncurses-devel libtool libxslt fop java-1.6.0-openjdk java-1.6.0-openjdk-devel unixODBC unixODBC-devel openssl-devel

Step 2 – Installing Erlang

Erlang is required by CouchDB. The CentOS team is not offering any official packages, so you will have to compile it from source.

First, go to www.erlang.org/download.html and download the latest source code.

wget http://www.erlang.org/download/otp_src_R16B02.tar.gz

After your download is finished, unpack the archive:

tar -zxvf otp_src_R16B02.tar.gz

Now that we have the Erlang source code unpacked, we can start compiling it:

cd otp_src_R16B02
./configure && make

Next you’ll have to install it. By default, Erlang will be installed in /usr/local:

sudo make install

Step 3 – Installing the SpiderMonkey JS Engine

Mozilla’s SpiderMonkey JavaScript engine is required to compile CouchDB successfully.

CouchDB requires SpiderMonkey version 1.8.5, which you can download from Mozilla’s FTP server:

wget http://ftp.mozilla.org/pub/mozilla.org/js/js185-1.0.0.tar.gz

After the download is finished, unpack the archive:

tar -zxvf js185-1.0.0.tar.gz 

The next step is to compile and install it on your VPS:

cd js-1.8.5/js/src
./configure && make
sudo make install

Step 4 – Installing CouchDB

After all dependencies are satisfied, installing CouchDB is straightforward.

First, you’ll have to download and unpack the CouchDB source:

wget http://apache.osuosl.org/couchdb/source/1.4.0/apache-couchdb-1.4.0.tar.gz
tar -zxvf apache-couchdb-1.4.0.tar.gz

After we have the source code unpacked, we can start compiling it. This should take just a few minutes:

cd apache-couchdb-1.4.0
./configure && make

If everything is fine, we are now ready to install CouchDB:

sudo make install

Step 5 – Setting up CouchDB

After CouchDB is installed, you have to create the CouchDB user, set the proper permissions and add the startup scripts.

Let’s start by adding the couchdb user:

sudo adduser --no-create-home couchdb

The couchdb user must have the proper permissions to access a few directories:

sudo chown -R couchdb:couchdb /usr/local/var/lib/couchdb /usr/local/var/log/couchdb /usr/local/var/run/couchdb

Next, we’ll have to create a link for the couchdb init script to /etc/init.d:

sudo ln -sf /usr/local/etc/rc.d/couchdb /etc/init.d/couchdb

If you’d like CouchDB to start automatically at boot, add and enable the init script in chkconfig:

sudo chkconfig --add couchdb
sudo chkconfig couchdb on

By default, CouchDB can be accessed only from the VPS itself. If you’d like to access it from the web, you’ll have to change the configuration file.

Open the configuration file in an editor:

sudo nano /usr/local/etc/couchdb/local.ini

In the [httpd] section, look for the bind_address setting and change it to 0.0.0.0; this makes CouchDB bind to all available network interfaces.

[httpd]
port = 5984
bind_address = 0.0.0.0
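If you prefer a non-interactive edit over nano, the same change can be made with sed. A sketch rehearsed on a sample copy (the /tmp path is a stand-in; on a real system run the sed line with sudo against /usr/local/etc/couchdb/local.ini):

```shell
# Sample copy standing in for /usr/local/etc/couchdb/local.ini.
INI=/tmp/local.ini.sample
cat > "$INI" <<'EOF'
[httpd]
port = 5984
;bind_address = 127.0.0.1
EOF

# Replace the bind_address line, whether or not it is commented out with ';'.
sed -i 's/^;\{0,1\}bind_address *=.*/bind_address = 0.0.0.0/' "$INI"
cat "$INI"
```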

Now we’re ready to start CouchDB:

sudo service couchdb start

To verify that CouchDB is running, connect to it on port 5984:

curl http://localhost:5984

You should see a response like:

{"couchdb":"Welcome","uuid":"a9e7db070cfe85e6a770aa254c49c8c3","version":"1.4.0","vendor":{"name":"The Apache Software Foundation","version":"1.4.0"}}
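The welcome document can also be checked from a script. A minimal sketch that pulls the version field out of the JSON with standard shell tools; WELCOME here is a canned sample, not live output:

```shell
# Canned welcome JSON; on a live server use: WELCOME=$(curl -s http://localhost:5984)
WELCOME='{"couchdb":"Welcome","version":"1.4.0","vendor":{"name":"The Apache Software Foundation","version":"1.4.0"}}'

# Pull out the first "version" value without a JSON parser.
VERSION=$(printf '%s' "$WELCOME" | grep -o '"version":"[^"]*"' | head -n 1 | cut -d'"' -f4)
echo "CouchDB version: $VERSION"
```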

After you confirm that your server is up and running, you can access it in a browser at http://your.DO.IP.address:5984/_utils.

[repost ]Couchdb Installation on Unix-like systems

original:http://docs.couchdb.org/en/latest/install/unix.html

2.1. Installation on Unix-like systems

A high-level guide to Unix-like systems, inc. Mac OS X and Ubuntu.

This document is the canonical source of installation information. However, many systems have gotchas that you need to be aware of. In addition, dependencies frequently change as distributions update their archives. If you’re running into trouble, be sure to check out the wiki. If you have any tips to share, please also update the wiki so that others can benefit from your experience.

2.1.1. Troubleshooting

Please work through these in order if you experience any problems.

2.1.2. Dependencies

You should have the following installed:

  • Erlang OTP (>=R13B04)
  • ICU
  • OpenSSL
  • Mozilla SpiderMonkey (1.8.5)
  • GNU Make
  • GNU Compiler Collection
  • libcurl
  • help2man
  • Python
  • Python Sphinx

It is recommended that you install Erlang OTP R13B04 or above where possible. You will only need libcurl if you plan to run the JavaScript test suite. And help2man is only needed if you plan on installing the CouchDB man pages. Python and Sphinx are only required for building the online documentation.

Debian-based Systems

You can install the dependencies by running:

sudo apt-get install build-essential
sudo apt-get install erlang-base-hipe
sudo apt-get install erlang-dev
sudo apt-get install erlang-manpages
sudo apt-get install erlang-eunit
sudo apt-get install erlang-nox
sudo apt-get install libicu-dev
sudo apt-get install libmozjs-dev
sudo apt-get install libcurl4-openssl-dev

There are lots of Erlang packages. If there is a problem with your install, try a different mix. There is more information on the wiki. Additionally, you might want to install some of the optional Erlang tools which may also be useful.

Be sure to update the version numbers to match your system’s available packages.

Unfortunately, it seems that installing dependencies on Ubuntu is troublesome.

RedHat-based (Fedora, CentOS, RHEL) Systems

You can install the dependencies by running:

sudo yum install autoconf
sudo yum install autoconf-archive
sudo yum install automake
sudo yum install curl-devel
sudo yum install erlang-asn1
sudo yum install erlang-erts
sudo yum install erlang-eunit
sudo yum install erlang-os_mon
sudo yum install erlang-xmerl
sudo yum install help2man
sudo yum install js-devel
sudo yum install libicu-devel
sudo yum install libtool
sudo yum install perl-Test-Harness

While CouchDB builds against the default js-devel-1.7.0 included in some distributions, it’s recommended to use a more recent js-devel-1.8.5.

Mac OS X

Follow the Installation with Homebrew reference up to the brew install couchdb step.

2.1.3. Installing

Once you have satisfied the dependencies you should run:

./configure

This script will configure CouchDB to be installed into /usr/local by default.

If you wish to customise the installation, pass --help to this script.

If everything was successful you should see the following message:

You have configured Apache CouchDB, time to relax.

Relax.

To install CouchDB you should run:

make && sudo make install

You only need to use sudo if you’re installing into a system directory.

Try gmake if make is giving you any problems.

If everything was successful you should see the following message:

You have installed Apache CouchDB, time to relax.

Relax.

2.1.4. First Run

You can start the CouchDB server by running:

sudo -i -u couchdb couchdb

This uses the sudo command to run the couchdb command as the couchdb user.

When CouchDB starts it should eventually display the following message:

Apache CouchDB has started, time to relax.

Relax.

To check that everything has worked, point your web browser to:

http://127.0.0.1:5984/_utils/index.html

From here you should verify your installation by pointing your web browser to:

http://localhost:5984/_utils/verify_install.html

2.1.5. Security Considerations

You should create a special couchdb user for CouchDB.

On many Unix-like systems you can run:

adduser --system \
        --home /usr/local/var/lib/couchdb \
        --no-create-home \
        --shell /bin/bash \
        --group --gecos \
        "CouchDB Administrator" couchdb

On Mac OS X you can use the Workgroup Manager to create users.

You must make sure that:

  • The user has a working POSIX shell
  • The user’s home directory is /usr/local/var/lib/couchdb

You can test this by:

  • Trying to log in as the couchdb user
  • Running pwd and checking the present working directory

Change the ownership of the CouchDB directories by running:

chown -R couchdb:couchdb /usr/local/etc/couchdb
chown -R couchdb:couchdb /usr/local/var/lib/couchdb
chown -R couchdb:couchdb /usr/local/var/log/couchdb
chown -R couchdb:couchdb /usr/local/var/run/couchdb

Change the permission of the CouchDB directories by running:

chmod 0770 /usr/local/etc/couchdb
chmod 0770 /usr/local/var/lib/couchdb
chmod 0770 /usr/local/var/log/couchdb
chmod 0770 /usr/local/var/run/couchdb
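You can confirm that each mode took effect with stat from GNU coreutils, which prints the permission bits in octal. A minimal check, rehearsed here on a scratch directory; on a real system point it at the four CouchDB directories instead:

```shell
# Scratch directory standing in for e.g. /usr/local/etc/couchdb.
DIR=/tmp/couchdb-perms-demo
mkdir -p "$DIR"
chmod 0770 "$DIR"

# %a prints the permission bits in octal; expect 770 here.
stat -c %a "$DIR"
```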

2.1.6. Running as a Daemon

SysV/BSD-style Systems

You can use the couchdb init script to control the CouchDB daemon.

On SysV-style systems, the init script will be installed into:

/usr/local/etc/init.d

On BSD-style systems, the init script will be installed into:

/usr/local/etc/rc.d

We use the [init.d|rc.d] notation to refer to both of these directories.

You can control the CouchDB daemon by running:

/usr/local/etc/[init.d|rc.d]/couchdb [start|stop|restart|status]

If you wish to configure how the init script works, you can edit:

/usr/local/etc/default/couchdb

Comment out the COUCHDB_USER setting if you’re running as a non-superuser.

To start the daemon on boot, copy the init script to:

/etc/[init.d|rc.d]

You should then configure your system to run the init script automatically.

You may be able to run:

sudo update-rc.d couchdb defaults

If this fails, consult your system documentation for more information.

A logrotate configuration is installed into:

/usr/local/etc/logrotate.d/couchdb

Consult your logrotate documentation for more information.

It is critical that the CouchDB logs are rotated so as not to fill your disk.
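For reference, a logrotate stanza for CouchDB looks roughly like the following. This is a hedged sketch, not the exact installed file; the path assumes the default /usr/local prefix:

```
/usr/local/var/log/couchdb/*.log {
   weekly
   rotate 10
   copytruncate
   compress
   notifempty
   missingok
}
```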