# Policies

## GreedyPolicy

A `GreedyPolicy` moves the agent in the direction that minimizes the expected entropy after moving.

```
GreedyPolicy(x::Vehicle, n::Int)
```

The integer `n` denotes how many actions should be considered. If `n=6`, then the agent considers the expected entropy given 6 different directions, spaced an even 60 degrees apart.
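As a sketch of typical usage (assuming a `Vehicle` named `x` has already been constructed elsewhere; the constructor arguments for `Vehicle` depend on your setup):

```
# Consider 8 evenly spaced directions (45 degrees apart) and
# move in the one with the lowest expected entropy.
policy = GreedyPolicy(x, 8)
```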

## CirclePolicy

A `CirclePolicy` moves the agent perpendicularly to the last recorded bearing measurement, which ends up drawing a circle around the source.
The constructor is as follows:

```
CirclePolicy()
```

The `CirclePolicy` implicitly assumes that the sensor is of `BearingOnly` type.
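The perpendicular motion can be sketched as follows. This is an illustration of the geometric idea, not the package's internal implementation, and the bearing value `theta` is a hypothetical stand-in for the last recorded measurement:

```
# Illustration only: stepping at 90 degrees to the last bearing
# measurement keeps the agent at a roughly constant distance from
# the source, tracing out a circle around it.
theta = 120.0            # hypothetical last bearing measurement (degrees)
heading = theta + 90.0   # direction perpendicular to the bearing
dx = cosd(heading)       # unit step, east component
dy = sind(heading)       # unit step, north component
a = (dx, dy)             # action as a 2-tuple of Float64s
```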

## Custom Policy

You can create your own policies by subtyping the abstract `Policy` type and implementing the `action` function. Below is an example. Remember that to extend `FEBOL`'s `action` function, you must import it rather than just relying on `using`:

```
using FEBOL
import FEBOL.action

type CustomPolicy <: Policy
end

function action(m::SearchDomain, x::Vehicle, o::Float64, f::AbstractFilter, p::CustomPolicy)
    # your policy code
    # must return an action (2-tuple of Float64s)
end
```

Feel free to take advantage of the `normalize` function to ensure your action's norm is equal to the maximum distance the vehicle can take per time step:

```
normalize(a::Action, x::Vehicle)
```
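For instance, a custom policy that always heads toward a fixed point could normalize its raw direction vector. In this sketch, the goal point `(50.0, 50.0)` is hypothetical, and the field names `x.x` and `x.y` for the vehicle's position are assumptions; check the `Vehicle` type in your version of FEBOL:

```
using FEBOL
import FEBOL.action

type GoalPolicy <: Policy
end

function action(m::SearchDomain, x::Vehicle, o::Float64, f::AbstractFilter, p::GoalPolicy)
    # Hypothetical example: head toward the point (50, 50).
    # The field names x.x and x.y are assumptions; consult the
    # Vehicle type in your version of FEBOL.
    a = (50.0 - x.x, 50.0 - x.y)
    # Scale the action so its norm equals the maximum distance
    # the vehicle can travel per time step.
    return normalize(a, x)
end
```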