Testing your REST api in Go with forest

The Go package forest was created to simplify the code needed to write real tests of a REST API. Such tests need to set up HTTP requests, send them and inspect HTTP responses. The Go standard library has all that is needed to achieve this flow, but the required amount of code, especially the error handling, makes tests harder to read and longer to write.


Testing GitHub

Let’s create a few tests that examine the response of a public GitHub API call.

package main

import (
	"net/http"
	"testing"

	. "github.com/emicklei/forest"
)

var github = NewClient("https://api.github.com", new(http.Client))

func TestForestProjectExists(t *testing.T) {
	cfg := NewConfig("/repos/emicklei/{repo}", "forest").Header("Accept", "application/json")
	r := github.GET(t, cfg)
	ExpectStatus(t, r, 200)
}

The package variable github is a forest HTTP client wrapper and provides the API to send HTTP requests (such as GET). The cfg variable is a RequestConfig value that is used to build an http.Request with method, URL, headers and body. The variable r is a *http.Response. The ExpectStatus function checks the status code; if the code does not match, the failure is reported on the *testing.T.


When it comes to verifying the response payload, the forest package offers several functions to inspect JSON and XML documents. All functions operate on the *http.Response and take the *testing.T for reporting errors.

Now we extend the test to inspect the response document. Follow https://api.github.com/repos/emicklei/forest to see the document in your browser.

	type Repository struct {
		Owner struct {
			HtmlUrl string `json:"html_url"`
		} `json:"owner"`
	}
	var repo Repository

	ExpectJSONDocument(t, r, &repo)

	if got, want := repo.Owner.HtmlUrl, "https://github.com/emicklei"; got != want {
		t.Errorf("got %v want %v", got, want)
	}

The ExpectJSONDocument function takes the HTTP response r and tries to unmarshal the response content into the given struct.

If you only need to inspect a specific document element, use the JSONPath function instead.

	if got, want := JSONPath(t, r, ".owner.html_url"), "https://github.com/emicklei"; got != want {
		t.Errorf("got %v want %v", got, want)
	}

Other functions in forest unmarshal documents into structs, use templating to create request bodies, and navigate XML or JSON documents using XPath and JSONPath expressions.
In contrast to the standard behavior, the Body of an http.Response is made re-readable. This means one can apply multiple expectations to a response as well as dump the full contents.

func ExpectHeader(t T, r *http.Response, name, value string)
func ExpectJSONArray(t T, r *http.Response, callback func(array []interface{}))
func ExpectJSONDocument(t T, r *http.Response, doc interface{})
func ExpectJSONHash(t T, r *http.Response, callback func(hash map[string]interface{}))
func ExpectStatus(t T, r *http.Response, status int) bool
func ExpectString(t T, r *http.Response, callback func(content string))
func ExpectXMLDocument(t T, r *http.Response, doc interface{})
func JSONArrayPath(t T, r *http.Response, dottedPath string) interface{}
func JSONPath(t T, r *http.Response, dottedPath string) interface{}
func ProcessTemplate(t T, templateContent string, value interface{}) string
func Scolorf(syntaxCode string, format string, args ...interface{}) string
func SkipUnless(s skippeable, labels ...string)
func XMLPath(t T, r *http.Response, xpath string) interface{}
func Dump(t T, resp *http.Response)


In our company, many teams have built microservices and created all kinds of tests for them using different tools (JMeter, FitNesse, Gatling, …) and frameworks (JUnit, Spec, Cucumber, …). For the services in our team, we started out with JMeter because we needed performance tests. We wrote full API tests in JMeter as well, but we soon realized that we could write and run tests faster in Go, the language we also use to implement the services. Abstractions with types and functions really help to write a large collection of maintainable tests. And because of the speed of compilation and execution, we now run these API tests as frequently as we run the unit tests of our services.

Get started

See the full documentation on Godoc. This open-source project is available at https://github.com/emicklei/forest. MIT License

Posted in Go

Artreyu, an artifact assembly tool

Artreyu is an open-source command line tool for software build pipelines that need to create artifacts composed of multiple versioned build parts. The design of this tool is inspired by the Apache Maven project, which provides assembly support for Java projects. I created Artreyu to support a project that delivers compositions of many non-Java artifacts, some of which are operating-system dependent.

Artreyu works with a repository to archive, fetch and assemble artifacts. To support remote repositories such as Artifactory and Nexus, Artreyu has a simple plugin architecture; each plugin is just a binary that gets invoked by the artreyu tool. Sources for the Nexus plugin are on github.com/emicklei/artreyu-nexus.

Artreyu uses a simple descriptor file to specify group and version information.

api: 1

group:    com.company
artifact: awesometool
version:  1.0-SNAPSHOT
type:     tgz

Artifacts are stored using a directory layout similar to what Maven Central uses.


Notice the $osname in the path that allows for storing platform specific artifacts. For independent artifacts, the any-os label is used. For assembling composite artifacts, the descriptor can have a list of parts.

api: 1

group: com.company
artifact: assembly
version: 2
type: tgz

parts:
- group: com.company
  artifact: awesometool
  version: 1.0-SNAPSHOT
  type: tgz

- group: com.company
  artifact: html-documentation
  version: 2.0
  any-os: true
  type: tgz

When running the assemble command, the parts are downloaded to a temporary directory and extracted (if compressed); all content is then compressed again (if the assembly type indicates that) into a new artifact, which is archived back into the repository.

Available on github.com/emicklei/artreyu. MIT License.

Line scanning in Go

Today, I needed to keep track of the line number while scanning table-driven tests. The standard Go bufio.Scanner does not provide such information. Fortunately, in Go you can create your own by embedding the standard type and overriding the right method. That’s it.

import (
	"bufio"
	"io"
)

type linescanner struct {
	*bufio.Scanner
	line int
}

func newScanner(reader io.Reader) *linescanner {
	return &linescanner{bufio.NewScanner(reader), 0}
}

func (l *linescanner) Scan() bool {
	ok := l.Scanner.Scan()
	if ok {
		l.line++
	}
	return ok
}

Guava-like EventBus for Go

I needed a simple solution to notify components about changes in some global settings.
In my Java days, I had been using the Guava EventBus to solve similar problems, so I decided to cook up something that does the basics.

From Guava: “The EventBus allows publish-subscribe-style communication between components without requiring the components to explicitly register with one another (and thus be aware of each other).”

The Java implementation uses annotations to register components and subscribe their methods to event types. In my Go version, lacking such annotations, I decided to keep it simpler: just subscribe a function to an event by name. The singleton EventBus value will be an exposed package variable.


	bus := NewEventBus()
	bus.Subscribe("test", func(e Event) {
		fmt.Printf("%#v\n", e)
	})
	bus.Post("test", EventData{"pi": 3.14159})

The code

package lang

// EventData represents the context of an event, a simple map.
type EventData map[string]interface{}

// Event is sent on the bus to all subscribed listeners.
type Event struct {
	Name string
	Data EventData
}

// EventListener is the signature of functions that can handle an Event.
type EventListener func(Event)

// EventBus allows publish-subscribe-style communication between components
// without requiring the components to explicitly register with one another (and thus be aware of each other).
// Inspired by the Guava EventBus; this is a more lightweight implementation.
type EventBus struct {
	listeners map[string][]EventListener
}

func NewEventBus() EventBus {
	return EventBus{map[string][]EventListener{}}
}

// Subscribe adds an EventListener to be called when an event is posted.
func (e EventBus) Subscribe(name string, listener EventListener) {
	list, ok := e.listeners[name]
	if !ok {
		list = []EventListener{}
	}
	list = append(list, listener)
	e.listeners[name] = list
}

// Post sends an event to all subscribed listeners.
// Parameter data is optional; Post accepts at most one map parameter.
func (e EventBus) Post(name string, data ...map[string]interface{}) {
	list, ok := e.listeners[name]
	if !ok {
		return
	}
	event := Event{Name: name}
	if len(data) == 1 {
		event.Data = EventData(data[0])
	}
	for _, each := range list[:] { // range over a fixed-length view of the listeners
		each(event)
	}
}

Smalltalk collect on Go slice of int

package main

import "fmt"

type intSlice []int

func (i intSlice) collect(block func(i int) int) intSlice {
	r := make(intSlice, len(i))
	for j, v := range i {
		r[j] = block(v)
	}
	return r
}

func main() {
	numbers := intSlice{1, 2, 3}
	squared := func(i int) int { return i * i }

	fmt.Printf("%v", numbers.collect(squared))
}

Try it.

Implementations of select, inject and detect are left as an exercise for the reader.

Javascript CLI in Go using Otto

Today, I was playing with Otto and hacked together a small command-line-interface program in Go.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	"github.com/robertkrimen/otto"
)

var Otto = otto.New()

func main() {
	loop()
}

func dispatch(entry string) string {
	if len(entry) == 0 {
		return entry
	}
	value, err := Otto.Run(entry)
	if err != nil {
		return err.Error()
	}
	return fmt.Sprintf("%v", value)
}

func loop() {
	for {
		fmt.Print("> ")
		in := bufio.NewReader(os.Stdin)
		entered, err := in.ReadString('\n')
		if err != nil {
			return
		}
		entry := strings.TrimLeft(entered[:len(entered)-1], "\t ") // without tabs, spaces and newline
		output := dispatch(entry)
		if len(output) > 0 {
			fmt.Println(output)
		}
	}
}

One cool feature of Otto is the ability to expose your own Go functions in the Javascript namespace.
This example below adds the function log to print its arguments using the standard log package.

	Otto.Set("log", func(call otto.FunctionCall) otto.Value {
		log.Print(call.ArgumentList)
		return otto.UndefinedValue()
	})

With this setup, I can call this function on the command line like this

> log("hi")
> hi

From multiple to single-value context in Go

These are the results of exploring one aspect of the Go language; functions can return multiple values.
Consider the following simple function:

func ab() (a, b int) {
	a, b = 1, 2
	return
}

Clearly, this function cannot be used in other function calls that expect a single value.


The compiler will help you: “multiple-value ab() in single-value context”.

Now suppose you want to take only the first value without introducing an intermediate variable.
One possible solution might be to create this helper function:

func first(args ...interface{}) interface{} {
	return args[0]
}

Such that you can write:

	first(ab())

But what if you want to generalize this such that you won’t end up with functions like second(),third(),…
To achieve this, I tried the following:

func pick(index int, args ...interface{}) interface{} {
	return args[index]
}

However, again the compiler prompts with the same error message: “multiple-value ab() in single-value context”.
It is my interpretation that the compiler (in this case) cannot implicitly convert the multiple returns into a slice because of the extra function parameter.

So instead I created this slice converter function:

func slice(args ...interface{}) []interface{} {
	return args
}

And with that, you can write:

	slice(ab())[0]

A case of sizing and draining buffered Go channels

Update: Based on several comments posted on the G+ Go community, I rewrote the solution presented here. See past the Conclusion section.

Recently, I was asked to write a small program for collecting product promotion information for a large collection of products. Given an input file of product identifiers, the following tasks had to be done:

1. read all identifiers from an input file
2. for each identifier, fetch product offer information in XML
3. parse the offer XML into a DOM document
4. extract all promotions from the offer document
5. for each promotion, export its data into a CSV record

A straightforward implementation is to write a function for each task and call them in a nested loop. For this program, I created functions such that the basic flow is quite simple.

xml := fetchSellingOfferXml(id)
doc := parseStringXml(xml)
promos := extractPromotions(doc)

Next, I created the loops (reading identifiers and iterating the promotions) and the program was completed.

However, as predicted, some of these tasks take relatively more time to execute. Getting product offer information requires an HTTP request to a remote WebService.
The next task, parsing the XML response that represents an offer with promotion information, also takes time. Finally, given the requirement of processing 50,000 identifiers, I decided to use Go concurrency here.

Concurrency to the rescue

Again, a straightforward way to implement concurrency in Go is to use goroutines and channels. Channels are used to communicate input and output data between functions that are called inside goroutines.

chaining functions with channels

Because one input (an identifier) can result in zero (no offer) or more promotions, and because some functions take more time than others, I decided to use buffered channels. To balance the time between fetching an offer XML and processing it, I expected to use multiple goroutines for fetching.

The following channels are initialized:

id_channel := make(chan string, CHAN_CAP)
xml_channel := make(chan string, CHAN_CAP)
offer_channel := make(chan *dom.Document, CHAN_CAP)
promotion_channel := make(chan *promotion, CHAN_CAP)

Next, I created new functions to wrap each task function in a loop. For example, fetchXmlLoop blocks on a read from the id-channel. After receiving an identifier, it calls fetchSellingOfferXml. Unless the result is empty (no offer available), the result is sent on the xml-channel. Now, my program can set up the chain of channels by spawning a goroutine for each loop.

go fetchXmlLoop(id_channel, xml_channel)
go parseXmlLoop(xml_channel, offer_channel)
go extractPromotionsLoop(offer_channel, promotion_channel)
go writePromotionsLoop(promotion_channel, quit_channel)

Real challenges

Up to this point the story is pretty straightforward and easy to understand without getting into details. Now I will discuss the real challenges for this program because it is not complete.


At some point in time, the program has to terminate gracefully. All identifiers have to be processed, and as a result all available promotions have to be exported. Because the program uses multiple buffered channels, the program can only terminate once all channels are drained; no data can be left unconsumed.

The simple method of counting cannot be used here; for a given identifier there might be no offer information, and if offer information is available there can be zero or more promotions.
Another method would be polling the length of each channel. Again, this method cannot be used here, as detecting an empty xml-channel does not imply there is nothing more to receive in the future.

Fortunately, I found a solution which involves the use of a termination value for each channel type. This termination value is sent through the chain of channels “notifying” each goroutine loop to discontinue. After sending a termination value on the next channel it can stop.

For this program, after reading all product identifiers from the file, the terminating identifier (“eof”) is sent on the id-channel. The fetchXmlLoop will receive this value and send the termination XML string (“eof”) on the xml-channel. For each channel datatype, a termination value is defined.

no_more_ids = "eof"
no_more_xml = "eof"
no_more_offers = new(dom.Document)
no_more_promotions = new(promotion)

The fetchXmlLoop tests each received identifier against the termination value and acts accordingly.

	func fetchXmlLoop(id_channel chan string, xml_channel chan string) {
		for {
			id := <-id_channel
			if no_more_ids == id {
				xml_channel <- no_more_xml
				return
			} else {
				xml := fetchSellingOfferXml(id)
				if xml != "" { // empty means no offer was found
					xml_channel <- xml
				}
			}
		}
	}

This pattern has been applied to all loop functions listed above. There is one caveat though. The thread (or goroutine) that runs the main function cannot exit until the termination value has been received by the last channel in the chain. To synchronize this, I introduced the non-buffered quit-channel. After reading all identifiers from the input file, a boolean value is sent on the quit-channel; this send blocks. When the writePromotionsLoop receives the termination value from the promotion-channel, it takes the boolean value from the quit-channel. Now, the program terminates after all work is done.

	// start reading and feeding the chain
	scanner := bufio.NewScanner(fileIn)
	for scanner.Scan() {
		id_channel <- scanner.Text()
	}
	// send terminating identifier
	id_channel <- no_more_ids

	// and wait for the last channel to be drained
	quit_channel <- true


The concurrent program now uses multiple buffered channels and can create multiple goroutines, each executing a loop. In general, each channel must be given a capacity, and for each loop I need to decide how many goroutines could run in parallel. Even for this simple program, there are potentially 4 + 4 variables to size.

A practical way to find these values, such that the program runs as fast as possible, is to measure the use of each channel. If a channel capacity is set too low, the sending goroutines will have to wait. If too many goroutines are created for one loop function, the channel may not provide data fast enough to keep them busy.

For this purpose, I created a simple ChannelsObserver object. It runs a goroutine that periodically inspects the length of all registered channels and logs basic statistical information (min, max, average, …). To compute the statistics, I am using the go-metrics package. Because in Go it is not possible to store differently typed channels as values in a single map, I defined the ChannelLengthReporter type.

type ChannelLengthReporter func() int

Now, each channel can be registered on the ChannelsObserver using a closure.

observer := ChannelsObserver{}
observer.Register("id", func() int { return len(id_channel) })
observer.Register("xml", func() int { return len(xml_channel) })
observer.Register("offer", func() int { return len(offer_channel) })
observer.Register("promotion", func() int { return len(promotion_channel) })
timeBetweenLogEntry, _ := time.ParseDuration("5s")
observer.Run(timeBetweenLogEntry)

In the loop that reads identifiers from the input file, I added an observer.Observe() call to measure the current status of all channels.
The observer will log the metrics of all its channels every 5 seconds.

Channel Length Observer

	type ChannelsObserver struct {
		reporters  map[string]ChannelLengthReporter
		histograms map[string]metrics.Histogram
	}

	func (c *ChannelsObserver) Register(name string, reporter ChannelLengthReporter) {
		if c.reporters == nil {
			// lazy init
			c.reporters = make(map[string]ChannelLengthReporter)
			c.histograms = make(map[string]metrics.Histogram)
		}
		c.reporters[name] = reporter
		sample := metrics.NewUniformSample(100)
		c.histograms[name] = metrics.NewHistogram(sample)
		metrics.Register(name, c.histograms[name])
	}

	func (c ChannelsObserver) Observe() {
		for name, reporter := range c.reporters {
			c.histograms[name].Update(int64(reporter()))
		}
	}

	func (c ChannelsObserver) Run(logEvery time.Duration) {
		go metrics.Log(metrics.DefaultRegistry, logEvery, log.New(os.Stderr, "metrics: ", log.Lmicroseconds))
	}


Using goroutines and channels, two of the Go concurrency features, I was able to apply concurrency to this program. Because the program uses buffered channels to decouple processing tasks that vary in time, correctly draining the channels was solved by passing “termination” data. Finally, to size the capacity of each channel and to decide how many goroutines each processing task needs, measuring channel usage is a valid method.

Update from feedback

First, given the solution described earlier and its measurement results, I observed that only the fetching process takes significantly more time than any other processing step. This is simply due to the network latency involved. Consequently, all but one channel remained empty; the id_channel was always saturated. Using buffered channels is not the right solution here.

Second, studying the comments added by several Gophers on the Go Community post, I decided to re-implement the solution using a pattern suggested and prototyped by Bryan Mills. Basically, his advice is: “You don’t need buffers for any of these: the goroutines themselves form all the buffer you need”. Using the standard WaitGroup synchronization type, this solution spawns goroutines for the different stages (fetch, parse, export) and makes them terminate and close the channels gracefully. I put in extra logging to view the actual flow; see below the code for an example output.

As can be seen from the code, no buffered channels are used. Each stage is implemented using the same pattern for which a WaitGroup is created, goroutines are created and the cleanup is done using a deferred function call. All channels are consumed using the range operator.

	// spawn 1 goroutine to export promotions
	var promoWG sync.WaitGroup
	defer func() {
		glog.V(1).Info("waiting for the promo group")
		promoWG.Wait()
	}()
	promos := make(chan *promotion)
	promoWG.Add(1)
	go func() {
		defer func() {
			glog.V(1).Infof("promo writer done")
			promoWG.Done()
		}()
		for promo := range promos {
			writePromotion(promo)
		}
	}()

	// waitgroup to synchronize finished goroutines for parsing
	var parseWG sync.WaitGroup
	defer func() {
		glog.V(1).Info("waiting for the parse group")
		parseWG.Wait()
		close(promos)
	}()

	// spawn NumCPU goroutines to process the XML documents
	xml := make(chan string)
	for i := 0; i < runtime.NumCPU(); i++ {
		parseWG.Add(1)
		go func(parser int) {
			defer func(parser int) {
				glog.V(1).Infof("parser %d done", parser)
				parseWG.Done()
			}(parser)
			for each := range xml {
				doc, err := dom.ParseStringXml(each)
				if err == nil {
					extractPromotions(doc, promos)
				} else {
					// log it
				}
			}
		}(i)
	}

	// waitgroup to synchronize finished goroutines for fetching
	var fetchWG sync.WaitGroup
	defer func() {
		glog.V(1).Info("waiting for the fetch group")
		fetchWG.Wait()
		close(xml)
	}()

	// spawn up to FETCHER_CAP goroutines to process each read identifier
	nFetchers := 0
	scanner := bufio.NewScanner(fileIn)
	id_channel := make(chan string)
	for scanner.Scan() {
		if nFetchers < FETCHER_CAP {
			nFetchers++
			fetchWG.Add(1)
			go func(fetcher int) {
				defer func(fetcher int) {
					glog.V(1).Infof("fetcher %d done", fetcher)
					fetchWG.Done()
				}(fetcher)
				for id := range id_channel {
					xml <- fetchSellingOfferXml(id)
				}
			}(nFetchers)
		}
		id_channel <- scanner.Text()
	}
	close(id_channel)
	glog.V(1).Info("done scanning")
I1015 11:09:38.632012 06942 getpromos_waitgroup.go:190] fetchSellingOffer: http://offer.services.xxxxx/slo/products/9200000019409664/offer
I1015 11:09:38.632309 06942 getpromos_waitgroup.go:190] fetchSellingOffer: http://offer.services.xxxxx/slo/products/9000000011756074/offer
I1015 11:09:38.632505 06942 getpromos_waitgroup.go:190] fetchSellingOffer: http://offer.services.xxxxx/slo/products/9200000011859583/offer
I1015 11:09:38.632543 06942 getpromos_waitgroup.go:123] done scanning
I1015 11:09:38.632551 06942 getpromos_waitgroup.go:97] waiting for the fetch group
I1015 11:09:38.694256 06942 getpromos_waitgroup.go:112] fetcher 1 done
I1015 11:09:38.694920 06942 getpromos_waitgroup.go:158] extractPromotions
I1015 11:09:38.694950 06942 getpromos_waitgroup.go:140] writePromotion
I1015 11:09:38.700120 06942 getpromos_waitgroup.go:112] fetcher 2 done
I1015 11:09:38.701317 06942 getpromos_waitgroup.go:158] extractPromotions
I1015 11:09:38.701367 06942 getpromos_waitgroup.go:140] writePromotion
I1015 11:09:38.701429 06942 getpromos_waitgroup.go:140] writePromotion
I1015 11:09:38.701535 06942 getpromos_waitgroup.go:112] fetcher 3 done
I1015 11:09:38.702002 06942 getpromos_waitgroup.go:158] extractPromotions
I1015 11:09:38.702019 06942 getpromos_waitgroup.go:69] waiting for the parse group
I1015 11:09:38.702029 06942 getpromos_waitgroup.go:140] writePromotion
I1015 11:09:38.702092 06942 getpromos_waitgroup.go:80] parser 3 done
I1015 11:09:38.702098 06942 getpromos_waitgroup.go:80] parser 4 done
I1015 11:09:38.702102 06942 getpromos_waitgroup.go:80] parser 5 done
I1015 11:09:38.702105 06942 getpromos_waitgroup.go:80] parser 6 done
I1015 11:09:38.702109 06942 getpromos_waitgroup.go:80] parser 7 done
I1015 11:09:38.702113 06942 getpromos_waitgroup.go:80] parser 0 done
I1015 11:09:38.702116 06942 getpromos_waitgroup.go:80] parser 1 done
I1015 11:09:38.702120 06942 getpromos_waitgroup.go:80] parser 2 done
I1015 11:09:38.702124 06942 getpromos_waitgroup.go:51] waiting for the promo group
I1015 11:09:38.702128 06942 getpromos_waitgroup.go:58] promo writer done

Hopwatch – a debugging tool for Go

Hopwatch is an experimental tool that can help debugging Go programs. Unlike most debuggers, hopwatch requires you to insert function calls at points of interest in your program. At these program locations, you can tell Hopwatch to display variable values and suspend the program (or goroutine). Hopwatch uses Websockets to exchange commands between your program and the debugger running in a HTML5 page.

Using Hopwatch

Basically, there are two functions: Display and Break. Calling Display prints the variable/value pairs in the debugger page. Calling Break suspends the program until you tell the debugger to Resume. The Display function takes one or more pairs of (variable name, value). The Break function takes one or more boolean values that make the suspension conditional. Below is a test example.

package main

import "github.com/emicklei/hopwatch"

func main() {
	for i := 0; i < 6; i++ {
		j := i * i
		hopwatch.Display("i", i, "j", j).Break(j > 10)
	}
}

Starting a Go program that includes a Hopwatch function call will wait for a connection to the debugger page.
You can disable Hopwatch by passing the command line parameter -hopwatch false.

2012/12/14 17:24:47 [hopwatch] open http://localhost:23456/hopwatch.html ...
2012/12/14 17:24:47 [hopwatch] no browser connection, wait for it ...

Once the debugger is connected, it can receive Display and Break command requests from your Go program. You can look at the stack trace, resume a suspended program or disconnect from your program.

Hopwatch debugger page

I made a short movie (1.5Mb .mov) that demonstrates using hopwatch in a multiple goroutines program. This example is included in the tests folder of the hopwatch package.

What’s next?

Creating this tool has been fun and educational. Hopwatch uses two channels to communicate between the sendLoop and receiveLoop goroutines, and uses the go.net Websocket package to talk to Javascript in a browser. As a tool, Hopwatch may be useful (perhaps for remote debugging?) but of course it lacks the important features of a real debugger (GDB) such as setting breakpoints at runtime (and therefore without source modification).
If you have suggestions then please leave a comment on the announcement in the go-nuts discussion group.

The project is located at github; documentation can also be found on godoc.org.

Methods as objects in Go

In an earlier post, I discussed an example of using plain functions as objects. Today, I investigated solutions for using Methods as objects. As I understand it, Go methods are functions that are “bound” to a named type that is not a pointer or an interface.

package main

import "fmt"

type T struct {
	S string
}

func (t T) GetS1() string {
	return t.S
}

func (t *T) GetS2() string {
	return t.S
}

func main() {
	t := T{"hello"}
	f := (*T).GetS1
	g := (T).GetS1
	h := (*T).GetS2
	// i := (T).GetS2   // invalid method expression T.GetS2 (needs pointer receiver: (*T).GetS2)
	fmt.Printf("%v, %v %v, who is there?", f(&t), g(t), h(&t))
}

Run it yourself.

In the example, I defined a type T with the value method GetS1 and the pointer method GetS2, both returning the value of field S. In the main() function, I create a T and three variables (f, g and h) to hold references to the methods GetS1 and GetS2. The statement for the variable i is commented out; it is an illegal construction. Finally, in the print statement, I call the functions with t as argument, either by its address or as a value.